Test Report: KVM_Linux_crio 18063

9a5d81419c51a6c3c4fef58cf8d1de8416716248:2024-02-29:33343

Failed tests (30/309)

| Order | Failed test                                                            | Duration (s) |
|-------|------------------------------------------------------------------------|--------------|
|    39 | TestAddons/parallel/Ingress                                            |       155.27 |
|    48 | TestAddons/parallel/NvidiaDevicePlugin                                 |         7.77 |
|    53 | TestAddons/StoppedEnableDisable                                        |       154.22 |
|   165 | TestIngressAddonLegacy/StartLegacyK8sCluster                           |       287.16 |
|   167 | TestIngressAddonLegacy/serial/ValidateIngressAddonActivation           |        80.74 |
|   168 | TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation        |        99.44 |
|   169 | TestIngressAddonLegacy/serial/ValidateIngressAddons                    |         0.23 |
|   211 | TestMountStart/serial/RestartStopped                                   |        25.47 |
|   223 | TestMultiNode/serial/RestartKeepsNodes                                 |       680.56 |
|   225 | TestMultiNode/serial/StopMultiNode                                     |       142.24 |
|   232 | TestPreload                                                            |       275.63 |
|   240 | TestKubernetesUpgrade                                                  |       373.10 |
|   312 | TestStartStop/group/old-k8s-version/serial/FirstStart                  |       293.25 |
|   338 | TestStartStop/group/embed-certs/serial/Stop                            |       138.90 |
|   340 | TestStartStop/group/no-preload/serial/Stop                             |       138.93 |
|   343 | TestStartStop/group/default-k8s-diff-port/serial/Stop                  |       138.73 |
|   344 | TestStartStop/group/old-k8s-version/serial/DeployApp                   |         0.53 |
|   345 | TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive      |        93.00 |
|   346 | TestStartStop/group/embed-certs/serial/EnableAddonAfterStop            |        12.38 |
|   347 | TestStartStop/group/no-preload/serial/EnableAddonAfterStop             |        12.42 |
|   350 | TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop  |        12.38 |
|   354 | TestStartStop/group/old-k8s-version/serial/SecondStart                 |       781.95 |
|   355 | TestStartStop/group/no-preload/serial/UserAppExistsAfterStop           |       544.65 |
|   356 | TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop          |       544.55 |
|   357 | TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop      |       543.68 |
|   358 | TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop |      544.42 |
|   359 | TestStartStop/group/no-preload/serial/AddonExistsAfterStop             |       373.04 |
|   360 | TestStartStop/group/embed-certs/serial/AddonExistsAfterStop            |       336.31 |
|   361 | TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop        |        89.80 |
|   362 | TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop  |       125.58 |
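
Any entry above can be rerun in isolation by filtering the integration suite on the test name. A minimal sketch, assuming a checkout of the minikube repository at the commit listed at the top of this report and a working KVM/libvirt host; the --minikube-start-args flag name comes from minikube's integration harness and should be verified against test/integration in the checkout:

    # Rerun only the failed Ingress subtest with the same driver/runtime as this job.
    go test -v -timeout 60m -run 'TestAddons/parallel/Ingress' ./test/integration \
      -args --minikube-start-args='--driver=kvm2 --container-runtime=crio'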
TestAddons/parallel/Ingress (155.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-600097 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-600097 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-600097 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d714fecf-09b1-4cd0-b639-1b12d34e13b3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d714fecf-09b1-4cd0-b639-1b12d34e13b3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004899303s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-600097 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.171106182s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-600097 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.181
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-600097 addons disable ingress-dns --alsologtostderr -v=1: (1.765607502s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-600097 addons disable ingress --alsologtostderr -v=1: (7.999124368s)
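
The status 28 in the stderr block above is curl's CURLE_OPERATION_TIMEDOUT exit code propagated through ssh: the ingress never answered on 127.0.0.1:80 inside the VM within curl's window, consistent with the probe taking 2m11s before failing. A sketch for probing this by hand against a live profile, assuming the same addons-600097 profile name; --max-time and -v are standard curl flags added here to bound and trace the request:

    # Bounded, verbose version of the probe the test performs.
    minikube -p addons-600097 ssh "curl -v --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
    # Check whether the controller pod behind the ingress is actually serving.
    kubectl --context addons-600097 -n ingress-nginx get pods -o wide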
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-600097 -n addons-600097
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-600097 logs -n 25: (1.369696138s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-561532                                                                     | download-only-561532 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-425270                                                                     | download-only-425270 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-057025                                                                     | download-only-057025 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-561532                                                                     | download-only-561532 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-801156 | jenkins | v1.32.0 | 29 Feb 24 01:12 UTC |                     |
	|         | binary-mirror-801156                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39823                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-801156                                                                     | binary-mirror-801156 | jenkins | v1.32.0 | 29 Feb 24 01:12 UTC | 29 Feb 24 01:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:12 UTC |                     |
	|         | addons-600097                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:12 UTC |                     |
	|         | addons-600097                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-600097 --wait=true                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:12 UTC | 29 Feb 24 01:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:14 UTC |
	|         | addons-600097                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-600097 ssh cat                                                                       | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:14 UTC |
	|         | /opt/local-path-provisioner/pvc-46cdb420-a06c-4c86-b1c5-0196b03f5f20_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-600097 addons disable                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-600097 ip                                                                            | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:14 UTC |
	| addons  | addons-600097 addons disable                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:14 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-600097 addons disable                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:14 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-600097 addons                                                                        | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC | 29 Feb 24 01:15 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC | 29 Feb 24 01:15 UTC |
	|         | addons-600097                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-600097 ssh curl -s                                                                   | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC |                     |
	|         | -p addons-600097                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC | 29 Feb 24 01:15 UTC |
	|         | -p addons-600097                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-600097 addons                                                                        | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC | 29 Feb 24 01:15 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-600097 addons                                                                        | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC | 29 Feb 24 01:15 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-600097 ip                                                                            | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:17 UTC | 29 Feb 24 01:17 UTC |
	| addons  | addons-600097 addons disable                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:17 UTC | 29 Feb 24 01:17 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-600097 addons disable                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:17 UTC | 29 Feb 24 01:17 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
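
The Audit table is rendered from minikube's local command audit log; a sketch for pulling the same data directly on the build host, where both the --audit flag and the audit.json path are assumptions to verify against the installed minikube version:

    # Dump the audit entries that back the table above.
    minikube logs --audit
    # Or read the raw log; path assumes the default MINIKUBE_HOME.
    cat "$HOME/.minikube/logs/audit.json"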
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:12:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:12:00.561663  324746 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:12:00.561761  324746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:12:00.561773  324746 out.go:304] Setting ErrFile to fd 2...
	I0229 01:12:00.561777  324746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:12:00.561988  324746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:12:00.562647  324746 out.go:298] Setting JSON to false
	I0229 01:12:00.563639  324746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3264,"bootTime":1709165857,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:12:00.563713  324746 start.go:139] virtualization: kvm guest
	I0229 01:12:00.565707  324746 out.go:177] * [addons-600097] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:12:00.567009  324746 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:12:00.567034  324746 notify.go:220] Checking for updates...
	I0229 01:12:00.568400  324746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:12:00.569675  324746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:12:00.570788  324746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:12:00.571930  324746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:12:00.572967  324746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:12:00.574209  324746 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:12:00.605793  324746 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 01:12:00.606889  324746 start.go:299] selected driver: kvm2
	I0229 01:12:00.606903  324746 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:12:00.606915  324746 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:12:00.607606  324746 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:12:00.607700  324746 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:12:00.622814  324746 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:12:00.622869  324746 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:12:00.623101  324746 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 01:12:00.623191  324746 cni.go:84] Creating CNI manager for ""
	I0229 01:12:00.623207  324746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:12:00.623215  324746 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 01:12:00.623229  324746 start_flags.go:323] config:
	{Name:addons-600097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-600097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
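
The config struct dumped above is also persisted as JSON per profile; the exact path appears in the "Saving config to" line a few entries below. A sketch for inspecting it on the build host, assuming jq is installed:

    # Pretty-print the saved cluster config for this profile.
    jq . /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/config.json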
	I0229 01:12:00.623423  324746 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:12:00.625069  324746 out.go:177] * Starting control plane node addons-600097 in cluster addons-600097
	I0229 01:12:00.626308  324746 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:12:00.626348  324746 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 01:12:00.626363  324746 cache.go:56] Caching tarball of preloaded images
	I0229 01:12:00.626465  324746 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 01:12:00.626477  324746 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 01:12:00.626777  324746 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/config.json ...
	I0229 01:12:00.626796  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/config.json: {Name:mk2e96a395af39f7672aec4cced3cd5fe3b7734b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:00.626930  324746 start.go:365] acquiring machines lock for addons-600097: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:12:00.626972  324746 start.go:369] acquired machines lock for "addons-600097" in 29.14µs
	I0229 01:12:00.626988  324746 start.go:93] Provisioning new machine with config: &{Name:addons-600097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-600097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 01:12:00.627041  324746 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 01:12:00.628620  324746 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0229 01:12:00.628756  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:12:00.628800  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:12:00.643045  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33601
	I0229 01:12:00.643521  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:12:00.644120  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:12:00.644143  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:12:00.644472  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:12:00.644674  324746 main.go:141] libmachine: (addons-600097) Calling .GetMachineName
	I0229 01:12:00.644821  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:00.644962  324746 start.go:159] libmachine.API.Create for "addons-600097" (driver="kvm2")
	I0229 01:12:00.645006  324746 client.go:168] LocalClient.Create starting
	I0229 01:12:00.645049  324746 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem
	I0229 01:12:00.818492  324746 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem
	I0229 01:12:00.931117  324746 main.go:141] libmachine: Running pre-create checks...
	I0229 01:12:00.931147  324746 main.go:141] libmachine: (addons-600097) Calling .PreCreateCheck
	I0229 01:12:00.931722  324746 main.go:141] libmachine: (addons-600097) Calling .GetConfigRaw
	I0229 01:12:00.932176  324746 main.go:141] libmachine: Creating machine...
	I0229 01:12:00.932194  324746 main.go:141] libmachine: (addons-600097) Calling .Create
	I0229 01:12:00.932318  324746 main.go:141] libmachine: (addons-600097) Creating KVM machine...
	I0229 01:12:00.933659  324746 main.go:141] libmachine: (addons-600097) DBG | found existing default KVM network
	I0229 01:12:00.934422  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:00.934283  324768 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0229 01:12:00.939592  324746 main.go:141] libmachine: (addons-600097) DBG | trying to create private KVM network mk-addons-600097 192.168.39.0/24...
	I0229 01:12:01.006111  324746 main.go:141] libmachine: (addons-600097) DBG | private KVM network mk-addons-600097 192.168.39.0/24 created
	I0229 01:12:01.006171  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:01.006059  324768 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:12:01.006197  324746 main.go:141] libmachine: (addons-600097) Setting up store path in /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097 ...
	I0229 01:12:01.006254  324746 main.go:141] libmachine: (addons-600097) Building disk image from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 01:12:01.006293  324746 main.go:141] libmachine: (addons-600097) Downloading /home/jenkins/minikube-integration/18063-316644/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 01:12:01.266326  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:01.266157  324768 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa...
	I0229 01:12:01.418130  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:01.417973  324768 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/addons-600097.rawdisk...
	I0229 01:12:01.418180  324746 main.go:141] libmachine: (addons-600097) DBG | Writing magic tar header
	I0229 01:12:01.418192  324746 main.go:141] libmachine: (addons-600097) DBG | Writing SSH key tar header
	I0229 01:12:01.418199  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:01.418134  324768 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097 ...
	I0229 01:12:01.418219  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097
	I0229 01:12:01.418306  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097 (perms=drwx------)
	I0229 01:12:01.418330  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines (perms=drwxr-xr-x)
	I0229 01:12:01.418341  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines
	I0229 01:12:01.418357  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:12:01.418367  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube (perms=drwxr-xr-x)
	I0229 01:12:01.418373  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644
	I0229 01:12:01.418384  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 01:12:01.418392  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins
	I0229 01:12:01.418408  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644 (perms=drwxrwxr-x)
	I0229 01:12:01.418418  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home
	I0229 01:12:01.418429  324746 main.go:141] libmachine: (addons-600097) DBG | Skipping /home - not owner
	I0229 01:12:01.418438  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 01:12:01.418443  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 01:12:01.418451  324746 main.go:141] libmachine: (addons-600097) Creating domain...
	I0229 01:12:01.419420  324746 main.go:141] libmachine: (addons-600097) define libvirt domain using xml: 
	I0229 01:12:01.419442  324746 main.go:141] libmachine: (addons-600097) <domain type='kvm'>
	I0229 01:12:01.419449  324746 main.go:141] libmachine: (addons-600097)   <name>addons-600097</name>
	I0229 01:12:01.419456  324746 main.go:141] libmachine: (addons-600097)   <memory unit='MiB'>4000</memory>
	I0229 01:12:01.419466  324746 main.go:141] libmachine: (addons-600097)   <vcpu>2</vcpu>
	I0229 01:12:01.419473  324746 main.go:141] libmachine: (addons-600097)   <features>
	I0229 01:12:01.419482  324746 main.go:141] libmachine: (addons-600097)     <acpi/>
	I0229 01:12:01.419495  324746 main.go:141] libmachine: (addons-600097)     <apic/>
	I0229 01:12:01.419500  324746 main.go:141] libmachine: (addons-600097)     <pae/>
	I0229 01:12:01.419504  324746 main.go:141] libmachine: (addons-600097)     
	I0229 01:12:01.419509  324746 main.go:141] libmachine: (addons-600097)   </features>
	I0229 01:12:01.419513  324746 main.go:141] libmachine: (addons-600097)   <cpu mode='host-passthrough'>
	I0229 01:12:01.419520  324746 main.go:141] libmachine: (addons-600097)   
	I0229 01:12:01.419524  324746 main.go:141] libmachine: (addons-600097)   </cpu>
	I0229 01:12:01.419542  324746 main.go:141] libmachine: (addons-600097)   <os>
	I0229 01:12:01.419578  324746 main.go:141] libmachine: (addons-600097)     <type>hvm</type>
	I0229 01:12:01.419591  324746 main.go:141] libmachine: (addons-600097)     <boot dev='cdrom'/>
	I0229 01:12:01.419598  324746 main.go:141] libmachine: (addons-600097)     <boot dev='hd'/>
	I0229 01:12:01.419608  324746 main.go:141] libmachine: (addons-600097)     <bootmenu enable='no'/>
	I0229 01:12:01.419618  324746 main.go:141] libmachine: (addons-600097)   </os>
	I0229 01:12:01.419627  324746 main.go:141] libmachine: (addons-600097)   <devices>
	I0229 01:12:01.419638  324746 main.go:141] libmachine: (addons-600097)     <disk type='file' device='cdrom'>
	I0229 01:12:01.419667  324746 main.go:141] libmachine: (addons-600097)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/boot2docker.iso'/>
	I0229 01:12:01.419704  324746 main.go:141] libmachine: (addons-600097)       <target dev='hdc' bus='scsi'/>
	I0229 01:12:01.419714  324746 main.go:141] libmachine: (addons-600097)       <readonly/>
	I0229 01:12:01.419729  324746 main.go:141] libmachine: (addons-600097)     </disk>
	I0229 01:12:01.419747  324746 main.go:141] libmachine: (addons-600097)     <disk type='file' device='disk'>
	I0229 01:12:01.419762  324746 main.go:141] libmachine: (addons-600097)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 01:12:01.419776  324746 main.go:141] libmachine: (addons-600097)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/addons-600097.rawdisk'/>
	I0229 01:12:01.419783  324746 main.go:141] libmachine: (addons-600097)       <target dev='hda' bus='virtio'/>
	I0229 01:12:01.419789  324746 main.go:141] libmachine: (addons-600097)     </disk>
	I0229 01:12:01.419798  324746 main.go:141] libmachine: (addons-600097)     <interface type='network'>
	I0229 01:12:01.419812  324746 main.go:141] libmachine: (addons-600097)       <source network='mk-addons-600097'/>
	I0229 01:12:01.419827  324746 main.go:141] libmachine: (addons-600097)       <model type='virtio'/>
	I0229 01:12:01.419836  324746 main.go:141] libmachine: (addons-600097)     </interface>
	I0229 01:12:01.419846  324746 main.go:141] libmachine: (addons-600097)     <interface type='network'>
	I0229 01:12:01.419856  324746 main.go:141] libmachine: (addons-600097)       <source network='default'/>
	I0229 01:12:01.419863  324746 main.go:141] libmachine: (addons-600097)       <model type='virtio'/>
	I0229 01:12:01.419878  324746 main.go:141] libmachine: (addons-600097)     </interface>
	I0229 01:12:01.419887  324746 main.go:141] libmachine: (addons-600097)     <serial type='pty'>
	I0229 01:12:01.419895  324746 main.go:141] libmachine: (addons-600097)       <target port='0'/>
	I0229 01:12:01.419908  324746 main.go:141] libmachine: (addons-600097)     </serial>
	I0229 01:12:01.419920  324746 main.go:141] libmachine: (addons-600097)     <console type='pty'>
	I0229 01:12:01.419929  324746 main.go:141] libmachine: (addons-600097)       <target type='serial' port='0'/>
	I0229 01:12:01.419940  324746 main.go:141] libmachine: (addons-600097)     </console>
	I0229 01:12:01.419948  324746 main.go:141] libmachine: (addons-600097)     <rng model='virtio'>
	I0229 01:12:01.419958  324746 main.go:141] libmachine: (addons-600097)       <backend model='random'>/dev/random</backend>
	I0229 01:12:01.419968  324746 main.go:141] libmachine: (addons-600097)     </rng>
	I0229 01:12:01.419984  324746 main.go:141] libmachine: (addons-600097)     
	I0229 01:12:01.420003  324746 main.go:141] libmachine: (addons-600097)     
	I0229 01:12:01.420012  324746 main.go:141] libmachine: (addons-600097)   </devices>
	I0229 01:12:01.420020  324746 main.go:141] libmachine: (addons-600097) </domain>
	I0229 01:12:01.420029  324746 main.go:141] libmachine: (addons-600097) 
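
The XML logged above is the libvirt domain definition minikube submits for the VM. A sketch for inspecting the resulting live objects with standard virsh commands, assuming the qemu:///system connection this run uses:

    # Show the domain as libvirt actually stored it.
    virsh --connect qemu:///system dumpxml addons-600097
    # Confirm both networks referenced by the <interface> elements are active.
    virsh --connect qemu:///system net-list --all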
	I0229 01:12:01.426127  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:0d:58:f8 in network default
	I0229 01:12:01.426959  324746 main.go:141] libmachine: (addons-600097) Ensuring networks are active...
	I0229 01:12:01.427006  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:01.427627  324746 main.go:141] libmachine: (addons-600097) Ensuring network default is active
	I0229 01:12:01.427955  324746 main.go:141] libmachine: (addons-600097) Ensuring network mk-addons-600097 is active
	I0229 01:12:01.428372  324746 main.go:141] libmachine: (addons-600097) Getting domain xml...
	I0229 01:12:01.428926  324746 main.go:141] libmachine: (addons-600097) Creating domain...
	I0229 01:12:02.777146  324746 main.go:141] libmachine: (addons-600097) Waiting to get IP...
	I0229 01:12:02.777835  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:02.778207  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:02.778266  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:02.778192  324768 retry.go:31] will retry after 258.133761ms: waiting for machine to come up
	I0229 01:12:03.037697  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:03.038126  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:03.038150  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:03.038076  324768 retry.go:31] will retry after 250.035533ms: waiting for machine to come up
	I0229 01:12:03.289431  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:03.289847  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:03.289877  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:03.289783  324768 retry.go:31] will retry after 440.875147ms: waiting for machine to come up
	I0229 01:12:03.732488  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:03.732880  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:03.732905  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:03.732842  324768 retry.go:31] will retry after 396.006304ms: waiting for machine to come up
	I0229 01:12:04.130600  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:04.131027  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:04.131054  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:04.130958  324768 retry.go:31] will retry after 599.846838ms: waiting for machine to come up
	I0229 01:12:04.732718  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:04.733175  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:04.733208  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:04.733111  324768 retry.go:31] will retry after 664.87235ms: waiting for machine to come up
	I0229 01:12:05.399846  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:05.400203  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:05.400226  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:05.400157  324768 retry.go:31] will retry after 876.719492ms: waiting for machine to come up
	I0229 01:12:06.278871  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:06.279255  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:06.279284  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:06.279205  324768 retry.go:31] will retry after 1.44982438s: waiting for machine to come up
	I0229 01:12:07.730844  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:07.731281  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:07.731332  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:07.731216  324768 retry.go:31] will retry after 1.582055103s: waiting for machine to come up
	I0229 01:12:09.315925  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:09.316413  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:09.316443  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:09.316336  324768 retry.go:31] will retry after 1.423644428s: waiting for machine to come up
	I0229 01:12:10.741772  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:10.742279  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:10.742322  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:10.742231  324768 retry.go:31] will retry after 2.206084184s: waiting for machine to come up
	I0229 01:12:12.951377  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:12.951792  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:12.951828  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:12.951745  324768 retry.go:31] will retry after 3.273018546s: waiting for machine to come up
	I0229 01:12:16.226625  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:16.227093  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:16.227118  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:16.227045  324768 retry.go:31] will retry after 3.33783935s: waiting for machine to come up
	I0229 01:12:19.567338  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:19.567773  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:19.567799  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:19.567733  324768 retry.go:31] will retry after 5.653686995s: waiting for machine to come up
	I0229 01:12:25.226351  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:25.226816  324746 main.go:141] libmachine: (addons-600097) Found IP for machine: 192.168.39.181
	I0229 01:12:25.226842  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has current primary IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:25.226848  324746 main.go:141] libmachine: (addons-600097) Reserving static IP address...
	I0229 01:12:25.227142  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find host DHCP lease matching {name: "addons-600097", mac: "52:54:00:2a:8d:58", ip: "192.168.39.181"} in network mk-addons-600097
	I0229 01:12:25.296958  324746 main.go:141] libmachine: (addons-600097) DBG | Getting to WaitForSSH function...
	I0229 01:12:25.296996  324746 main.go:141] libmachine: (addons-600097) Reserved static IP address: 192.168.39.181
	I0229 01:12:25.297011  324746 main.go:141] libmachine: (addons-600097) Waiting for SSH to be available...
	I0229 01:12:25.299611  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:25.299925  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097
	I0229 01:12:25.299951  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find defined IP address of network mk-addons-600097 interface with MAC address 52:54:00:2a:8d:58
	I0229 01:12:25.300094  324746 main.go:141] libmachine: (addons-600097) DBG | Using SSH client type: external
	I0229 01:12:25.300121  324746 main.go:141] libmachine: (addons-600097) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa (-rw-------)
	I0229 01:12:25.300197  324746 main.go:141] libmachine: (addons-600097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:12:25.300225  324746 main.go:141] libmachine: (addons-600097) DBG | About to run SSH command:
	I0229 01:12:25.300243  324746 main.go:141] libmachine: (addons-600097) DBG | exit 0
	I0229 01:12:25.303873  324746 main.go:141] libmachine: (addons-600097) DBG | SSH cmd err, output: exit status 255: 
	I0229 01:12:25.303894  324746 main.go:141] libmachine: (addons-600097) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 01:12:25.303902  324746 main.go:141] libmachine: (addons-600097) DBG | command : exit 0
	I0229 01:12:25.303915  324746 main.go:141] libmachine: (addons-600097) DBG | err     : exit status 255
	I0229 01:12:25.303922  324746 main.go:141] libmachine: (addons-600097) DBG | output  : 
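
Exit status 255 here is ssh's own failure code rather than the remote command's; the guest had no DHCP lease yet (see the "unable to find defined IP address" lines above), so the harness retries. A sketch of the equivalent manual probe, using the key path and user from this log and the IP the lease later resolves to:

    # Same liveness check the harness runs; prints ssh's exit code.
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
      -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa \
      docker@192.168.39.181 'exit 0'; echo "ssh exit: $?"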
	I0229 01:12:28.305749  324746 main.go:141] libmachine: (addons-600097) DBG | Getting to WaitForSSH function...
	I0229 01:12:28.307954  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.308294  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.308339  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.308369  324746 main.go:141] libmachine: (addons-600097) DBG | Using SSH client type: external
	I0229 01:12:28.308377  324746 main.go:141] libmachine: (addons-600097) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa (-rw-------)
	I0229 01:12:28.308442  324746 main.go:141] libmachine: (addons-600097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:12:28.308474  324746 main.go:141] libmachine: (addons-600097) DBG | About to run SSH command:
	I0229 01:12:28.308483  324746 main.go:141] libmachine: (addons-600097) DBG | exit 0
	I0229 01:12:28.434172  324746 main.go:141] libmachine: (addons-600097) DBG | SSH cmd err, output: <nil>: 
	I0229 01:12:28.434553  324746 main.go:141] libmachine: (addons-600097) KVM machine creation complete!
	I0229 01:12:28.434813  324746 main.go:141] libmachine: (addons-600097) Calling .GetConfigRaw
	I0229 01:12:28.435345  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:28.435539  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:28.435726  324746 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 01:12:28.435743  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:12:28.437108  324746 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 01:12:28.437123  324746 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 01:12:28.437129  324746 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 01:12:28.437157  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:28.439537  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.439995  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.440027  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.440170  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:28.440345  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.440511  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.440644  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:28.440794  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:28.441024  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:28.441039  324746 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 01:12:28.554036  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:12:28.554072  324746 main.go:141] libmachine: Detecting the provisioner...
	I0229 01:12:28.554080  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:28.557070  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.557374  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.557436  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.557624  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:28.557859  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.558055  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.558213  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:28.558388  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:28.558558  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:28.558569  324746 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 01:12:28.671881  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 01:12:28.671998  324746 main.go:141] libmachine: found compatible host: buildroot
	I0229 01:12:28.672014  324746 main.go:141] libmachine: Provisioning with buildroot...
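	(Provisioner detection works by running `cat /etc/os-release` over SSH and matching the ID field; the Buildroot output above maps to the buildroot provisioner. A small Go sketch of that matching; detectProvisioner is an illustrative helper name, not minikube's:)

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// detectProvisioner returns the ID= value from an os-release file,
// e.g. "buildroot" for the output logged above.
func detectProvisioner(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID= field in %s", path)
}

func main() {
	id, err := detectProvisioner("/etc/os-release")
	fmt.Println(id, err) // prints "buildroot <nil>" on the minikube guest
}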
	I0229 01:12:28.672028  324746 main.go:141] libmachine: (addons-600097) Calling .GetMachineName
	I0229 01:12:28.672303  324746 buildroot.go:166] provisioning hostname "addons-600097"
	I0229 01:12:28.672340  324746 main.go:141] libmachine: (addons-600097) Calling .GetMachineName
	I0229 01:12:28.672553  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:28.675289  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.675693  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.675715  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.675870  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:28.676045  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.676183  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.676291  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:28.676454  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:28.676623  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:28.676636  324746 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-600097 && echo "addons-600097" | sudo tee /etc/hostname
	I0229 01:12:28.808885  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-600097
	
	I0229 01:12:28.808928  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:28.811592  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.811911  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.811939  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.812164  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:28.812403  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.812576  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.812754  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:28.812926  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:28.813108  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:28.813127  324746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-600097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-600097/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-600097' | sudo tee -a /etc/hosts; 
				fi
			fi
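	(The /etc/hosts fixup above is a shell template with the machine name substituted in three places. A Go sketch that renders the same fragment for an arbitrary name; hostsFixup is a hypothetical helper, not a function from buildroot.go:)

package main

import "fmt"

// hostsFixup renders the shell fragment logged above: if no /etc/hosts
// entry mentions the machine name, either rewrite the 127.0.1.1 line or
// append a new one.
func hostsFixup(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() { fmt.Println(hostsFixup("addons-600097")) }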
	I0229 01:12:28.932636  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:12:28.932667  324746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 01:12:28.932728  324746 buildroot.go:174] setting up certificates
	I0229 01:12:28.932749  324746 provision.go:83] configureAuth start
	I0229 01:12:28.932764  324746 main.go:141] libmachine: (addons-600097) Calling .GetMachineName
	I0229 01:12:28.933099  324746 main.go:141] libmachine: (addons-600097) Calling .GetIP
	I0229 01:12:28.935796  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.936129  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.936153  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.936298  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:28.938710  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.939051  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.939088  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.939260  324746 provision.go:138] copyHostCerts
	I0229 01:12:28.939339  324746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 01:12:28.939489  324746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 01:12:28.939590  324746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 01:12:28.939662  324746 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.addons-600097 san=[192.168.39.181 192.168.39.181 localhost 127.0.0.1 minikube addons-600097]
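	(The server cert above is signed by the shared CA with the listed SANs. A compressed Go sketch of that signing step using crypto/x509; the key size, serial numbers, and three-year lifetime are illustrative assumptions, and error handling on key generation is elided:)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	// CA template; the org matches the org= field logged above.
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.addons-600097"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	// Server template with the SANs from the san=[...] list above.
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-600097"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "addons-600097"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.181"), net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	fmt.Println(len(der), err) // DER bytes of the signed server cert
}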
	I0229 01:12:29.007124  324746 provision.go:172] copyRemoteCerts
	I0229 01:12:29.007202  324746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:12:29.007238  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.009932  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.010282  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.010313  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.010495  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.010711  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.010857  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.010991  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:12:29.098268  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 01:12:29.124562  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 01:12:29.149870  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:12:29.175067  324746 provision.go:86] duration metric: configureAuth took 242.303028ms
	I0229 01:12:29.175094  324746 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:12:29.175253  324746 config.go:182] Loaded profile config "addons-600097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:12:29.175330  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.177923  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.178279  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.178312  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.178553  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.178739  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.178921  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.179046  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.179199  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:29.179403  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:29.179425  324746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 01:12:29.479481  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 01:12:29.479513  324746 main.go:141] libmachine: Checking connection to Docker...
	I0229 01:12:29.479522  324746 main.go:141] libmachine: (addons-600097) Calling .GetURL
	I0229 01:12:29.480791  324746 main.go:141] libmachine: (addons-600097) DBG | Using libvirt version 6000000
	I0229 01:12:29.482899  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.483235  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.483263  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.483414  324746 main.go:141] libmachine: Docker is up and running!
	I0229 01:12:29.483425  324746 main.go:141] libmachine: Reticulating splines...
	I0229 01:12:29.483434  324746 client.go:171] LocalClient.Create took 28.83841574s
	I0229 01:12:29.483468  324746 start.go:167] duration metric: libmachine.API.Create for "addons-600097" took 28.838505881s
	I0229 01:12:29.483481  324746 start.go:300] post-start starting for "addons-600097" (driver="kvm2")
	I0229 01:12:29.483498  324746 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:12:29.483521  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:29.483760  324746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:12:29.483784  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.485744  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.486030  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.486074  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.486180  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.486380  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.486517  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.486667  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:12:29.573820  324746 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:12:29.578791  324746 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:12:29.578822  324746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 01:12:29.578926  324746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 01:12:29.578951  324746 start.go:303] post-start completed in 95.464729ms
	I0229 01:12:29.578990  324746 main.go:141] libmachine: (addons-600097) Calling .GetConfigRaw
	I0229 01:12:29.579637  324746 main.go:141] libmachine: (addons-600097) Calling .GetIP
	I0229 01:12:29.582100  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.582453  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.582487  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.582721  324746 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/config.json ...
	I0229 01:12:29.582937  324746 start.go:128] duration metric: createHost completed in 28.955884846s
	I0229 01:12:29.582968  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.585039  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.585329  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.585358  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.585479  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.585666  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.585801  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.585929  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.586080  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:29.586267  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:29.586287  324746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 01:12:29.699567  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709169149.676426879
	
	I0229 01:12:29.699599  324746 fix.go:206] guest clock: 1709169149.676426879
	I0229 01:12:29.699606  324746 fix.go:219] Guest: 2024-02-29 01:12:29.676426879 +0000 UTC Remote: 2024-02-29 01:12:29.582950342 +0000 UTC m=+29.067750154 (delta=93.476537ms)
	I0229 01:12:29.699627  324746 fix.go:190] guest clock delta is within tolerance: 93.476537ms
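	(The guest-clock check compares the `date +%s.%N` reading against the host's timestamp and only resyncs when the delta exceeds a tolerance. A Go sketch with the exact values from this run; the 2-second threshold is an assumption for illustration, since the real constant lives in minikube's fix.go:)

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1709169149, 676426879) // parsed from `date +%s.%N`
	remote := time.Date(2024, 2, 29, 1, 12, 29, 582950342, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	// Prints a ~93ms delta, matching the "within tolerance" verdict above.
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < 2*time.Second)
}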
	I0229 01:12:29.699633  324746 start.go:83] releasing machines lock for "addons-600097", held for 29.072652544s
	I0229 01:12:29.699654  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:29.699960  324746 main.go:141] libmachine: (addons-600097) Calling .GetIP
	I0229 01:12:29.702461  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.702767  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.702800  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.702976  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:29.703520  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:29.703694  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:29.703800  324746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:12:29.703896  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.703962  324746 ssh_runner.go:195] Run: cat /version.json
	I0229 01:12:29.703989  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.706559  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.706829  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.706883  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.706907  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.707041  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.707224  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.707255  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.707278  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.707408  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.707430  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.707583  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.707590  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:12:29.707728  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.707862  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:12:29.787943  324746 ssh_runner.go:195] Run: systemctl --version
	I0229 01:12:29.812671  324746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 01:12:29.978111  324746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:12:29.984986  324746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:12:29.985042  324746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:12:30.001964  324746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 01:12:30.001989  324746 start.go:475] detecting cgroup driver to use...
	I0229 01:12:30.002047  324746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:12:30.017789  324746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:12:30.031876  324746 docker.go:217] disabling cri-docker service (if available) ...
	I0229 01:12:30.031939  324746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 01:12:30.045833  324746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 01:12:30.059910  324746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 01:12:30.183967  324746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 01:12:30.355461  324746 docker.go:233] disabling docker service ...
	I0229 01:12:30.355546  324746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 01:12:30.371759  324746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 01:12:30.386024  324746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 01:12:30.515464  324746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 01:12:30.644572  324746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 01:12:30.660416  324746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:12:30.680766  324746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 01:12:30.680833  324746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:12:30.692482  324746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 01:12:30.692585  324746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:12:30.703731  324746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:12:30.715441  324746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:12:30.727389  324746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:12:30.739774  324746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:12:30.750320  324746 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 01:12:30.750390  324746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 01:12:30.764582  324746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
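	(The three commands above implement a fallback: when the bridge-netfilter sysctl is missing, load br_netfilter, then enable IPv4 forwarding. As a Go sketch, with ensureBridgeNetfilter being a hypothetical wrapper around the same commands:)

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the logged sequence: probe the sysctl,
// fall back to modprobe if /proc/sys/net/bridge is absent, then turn on
// IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // module already loaded, sysctl readable
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && err.Run() != nil {
		return fmt.Errorf("modprobe br_netfilter failed")
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() { fmt.Println(ensureBridgeNetfilter()) }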
	I0229 01:12:30.775824  324746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:12:30.902471  324746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 01:12:31.053891  324746 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 01:12:31.053977  324746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 01:12:31.059293  324746 start.go:543] Will wait 60s for crictl version
	I0229 01:12:31.059381  324746 ssh_runner.go:195] Run: which crictl
	I0229 01:12:31.063795  324746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:12:31.100015  324746 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 01:12:31.100139  324746 ssh_runner.go:195] Run: crio --version
	I0229 01:12:31.131983  324746 ssh_runner.go:195] Run: crio --version
	I0229 01:12:31.165463  324746 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 01:12:31.167057  324746 main.go:141] libmachine: (addons-600097) Calling .GetIP
	I0229 01:12:31.169740  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:31.170097  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:31.170126  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:31.170349  324746 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 01:12:31.175052  324746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:12:31.188928  324746 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:12:31.188979  324746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:12:31.225759  324746 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 01:12:31.225858  324746 ssh_runner.go:195] Run: which lz4
	I0229 01:12:31.231184  324746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 01:12:31.235894  324746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 01:12:31.235951  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 01:12:32.960025  324746 crio.go:444] Took 1.728887 seconds to copy over tarball
	I0229 01:12:32.960122  324746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 01:12:35.879974  324746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.919813004s)
	I0229 01:12:35.880007  324746 crio.go:451] Took 2.919948 seconds to extract the tarball
	I0229 01:12:35.880018  324746 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 01:12:35.925983  324746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:12:35.983200  324746 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 01:12:35.983234  324746 cache_images.go:84] Images are preloaded, skipping loading
	I0229 01:12:35.983317  324746 ssh_runner.go:195] Run: crio config
	I0229 01:12:36.043493  324746 cni.go:84] Creating CNI manager for ""
	I0229 01:12:36.043525  324746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:12:36.043551  324746 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:12:36.043575  324746 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.181 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-600097 NodeName:addons-600097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 01:12:36.043795  324746 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-600097"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
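	(The three YAML documents above are rendered from the kubeadm options struct logged at kubeadm.go:176. A cut-down Go sketch of that templating step; the opts struct and trimmed template here are illustrative, not minikube's full ktmpl package:)

package main

import (
	"os"
	"text/template"
)

// opts mirrors a few fields of the logged kubeadm options struct.
type opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from this run's logged options.
	t.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.39.181",
		APIServerPort:     8443,
		NodeName:          "addons-600097",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.28.4",
	})
}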
	
	I0229 01:12:36.043889  324746 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-600097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-600097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 01:12:36.043970  324746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 01:12:36.056298  324746 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:12:36.056370  324746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:12:36.068977  324746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0229 01:12:36.089716  324746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 01:12:36.110414  324746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0229 01:12:36.131463  324746 ssh_runner.go:195] Run: grep 192.168.39.181	control-plane.minikube.internal$ /etc/hosts
	I0229 01:12:36.136259  324746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:12:36.152295  324746 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097 for IP: 192.168.39.181
	I0229 01:12:36.152338  324746 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.152482  324746 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 01:12:36.276858  324746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt ...
	I0229 01:12:36.276894  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt: {Name:mk193ee721ad2abcc60b7c061dc7c62a3de798cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.277056  324746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key ...
	I0229 01:12:36.277067  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key: {Name:mk1521f75403bd7da4291280d460d1915bb5045b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.277138  324746 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 01:12:36.322712  324746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt ...
	I0229 01:12:36.322740  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt: {Name:mk3b4f192034ba0b786cd41aeb52fee609cb164d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.322893  324746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key ...
	I0229 01:12:36.322904  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key: {Name:mka1c6506c3df4f07511468c975fce6d6408c79e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.323005  324746 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.key
	I0229 01:12:36.323019  324746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt with IP's: []
	I0229 01:12:36.407422  324746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt ...
	I0229 01:12:36.407456  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: {Name:mk19674df63e3f5d7d45057f34134d3f56e1ca82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.407614  324746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.key ...
	I0229 01:12:36.407627  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.key: {Name:mk79823e4c989cc5197f7db8a637f177801a3e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.407703  324746 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key.8841b717
	I0229 01:12:36.407721  324746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt.8841b717 with IP's: [192.168.39.181 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 01:12:36.557904  324746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt.8841b717 ...
	I0229 01:12:36.557944  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt.8841b717: {Name:mkb3f015b83c12c3372edcfb215034b00c91b960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.558103  324746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key.8841b717 ...
	I0229 01:12:36.558115  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key.8841b717: {Name:mkf467c11b14d1ad5ca1e8e193d5e9807f316b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.558184  324746 certs.go:337] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt.8841b717 -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt
	I0229 01:12:36.558306  324746 certs.go:341] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key.8841b717 -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key
	I0229 01:12:36.558361  324746 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.key
	I0229 01:12:36.558376  324746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.crt with IP's: []
	I0229 01:12:36.786979  324746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.crt ...
	I0229 01:12:36.787016  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.crt: {Name:mk2d5de8954296b1b84fda3b82111363c26b2900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.787177  324746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.key ...
	I0229 01:12:36.787197  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.key: {Name:mk7cf24ca02f209f26008edf8707c38c220b4a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.787386  324746 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 01:12:36.787426  324746 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 01:12:36.787457  324746 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:12:36.787493  324746 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 01:12:36.788346  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:12:36.818848  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 01:12:36.845293  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:12:36.872773  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 01:12:36.899044  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:12:36.924984  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:12:36.953695  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:12:36.980083  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 01:12:37.006963  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:12:37.033915  324746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:12:37.053190  324746 ssh_runner.go:195] Run: openssl version
	I0229 01:12:37.059791  324746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:12:37.073018  324746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:12:37.078350  324746 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:12:37.078429  324746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:12:37.084748  324746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
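	(The openssl steps above install the minikube CA under its subject hash, b5213941.0 in this run, which is how OpenSSL-style clients locate CAs in a hashed certificate directory. A Go sketch that reproduces the hash lookup by shelling out to openssl:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the logged `openssl x509 -hash -noout -in ...`.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in this run
	fmt.Printf("would link /etc/ssl/certs/%s.0 -> minikubeCA.pem\n", hash)
}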
	I0229 01:12:37.097736  324746 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:12:37.102571  324746 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:12:37.102632  324746 kubeadm.go:404] StartCluster: {Name:addons-600097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-600097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.181 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:12:37.102733  324746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 01:12:37.102780  324746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 01:12:37.142467  324746 cri.go:89] found id: ""
	I0229 01:12:37.142573  324746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:12:37.154433  324746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:12:37.165816  324746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:12:37.177330  324746 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:12:37.177383  324746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 01:12:37.231806  324746 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 01:12:37.231950  324746 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:12:37.373865  324746 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:12:37.373985  324746 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:12:37.374113  324746 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 01:12:37.601778  324746 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:12:37.830056  324746 out.go:204]   - Generating certificates and keys ...
	I0229 01:12:37.830191  324746 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:12:37.830303  324746 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:12:37.830412  324746 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 01:12:38.109208  324746 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 01:12:38.349761  324746 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 01:12:38.580724  324746 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 01:12:38.831557  324746 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 01:12:38.831710  324746 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-600097 localhost] and IPs [192.168.39.181 127.0.0.1 ::1]
	I0229 01:12:38.945419  324746 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 01:12:38.945599  324746 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-600097 localhost] and IPs [192.168.39.181 127.0.0.1 ::1]
	I0229 01:12:39.092599  324746 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 01:12:39.164895  324746 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 01:12:39.316641  324746 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 01:12:39.316976  324746 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:12:39.433881  324746 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:12:39.628239  324746 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:12:40.008814  324746 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:12:40.183951  324746 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:12:40.184533  324746 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:12:40.186918  324746 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:12:40.188781  324746 out.go:204]   - Booting up control plane ...
	I0229 01:12:40.188904  324746 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:12:40.189846  324746 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:12:40.191257  324746 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:12:40.212699  324746 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:12:40.212840  324746 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:12:40.212907  324746 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 01:12:40.344832  324746 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:12:46.344485  324746 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.002991 seconds
	I0229 01:12:46.344650  324746 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 01:12:46.363156  324746 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 01:12:46.895676  324746 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 01:12:46.895905  324746 kubeadm.go:322] [mark-control-plane] Marking the node addons-600097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 01:12:47.413021  324746 kubeadm.go:322] [bootstrap-token] Using token: i2768e.hjj2wzw3cu3l808f
	I0229 01:12:47.414777  324746 out.go:204]   - Configuring RBAC rules ...
	I0229 01:12:47.414944  324746 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 01:12:47.421765  324746 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 01:12:47.432939  324746 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 01:12:47.436971  324746 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0229 01:12:47.442068  324746 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 01:12:47.448488  324746 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 01:12:47.464555  324746 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 01:12:47.722639  324746 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 01:12:47.841469  324746 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 01:12:47.841513  324746 kubeadm.go:322] 
	I0229 01:12:47.841597  324746 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 01:12:47.841610  324746 kubeadm.go:322] 
	I0229 01:12:47.841715  324746 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 01:12:47.841731  324746 kubeadm.go:322] 
	I0229 01:12:47.841770  324746 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 01:12:47.841866  324746 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 01:12:47.841946  324746 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 01:12:47.841963  324746 kubeadm.go:322] 
	I0229 01:12:47.842042  324746 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 01:12:47.842051  324746 kubeadm.go:322] 
	I0229 01:12:47.842149  324746 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 01:12:47.842168  324746 kubeadm.go:322] 
	I0229 01:12:47.842273  324746 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 01:12:47.842385  324746 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 01:12:47.842482  324746 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 01:12:47.842497  324746 kubeadm.go:322] 
	I0229 01:12:47.842634  324746 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 01:12:47.842739  324746 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 01:12:47.842749  324746 kubeadm.go:322] 
	I0229 01:12:47.842849  324746 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token i2768e.hjj2wzw3cu3l808f \
	I0229 01:12:47.842973  324746 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 \
	I0229 01:12:47.843009  324746 kubeadm.go:322] 	--control-plane 
	I0229 01:12:47.843018  324746 kubeadm.go:322] 
	I0229 01:12:47.843118  324746 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 01:12:47.843130  324746 kubeadm.go:322] 
	I0229 01:12:47.843242  324746 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token i2768e.hjj2wzw3cu3l808f \
	I0229 01:12:47.843364  324746 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
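If the bootstrap token above expires before a node joins, the --discovery-token-ca-cert-hash value can be re-derived on the control plane with the recipe from the kubeadm documentation (paths assume the standard kubeadm layout):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'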
	I0229 01:12:47.843528  324746 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
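This warning is the one actionable item in the preflight output; as the message itself says, it is cleared by enabling the unit inside the guest:

    sudo systemctl enable kubelet.service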
	I0229 01:12:47.843559  324746 cni.go:84] Creating CNI manager for ""
	I0229 01:12:47.843569  324746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:12:47.845124  324746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 01:12:47.846516  324746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 01:12:47.882685  324746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
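The 457-byte conflist written above is generated by minikube itself. As a rough sketch of what a bridge-plugin conflist of this shape looks like (field values here are illustrative assumptions, not the exact bytes minikube ships):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF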
	I0229 01:12:47.922511  324746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 01:12:47.922601  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:47.922661  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=addons-600097 minikube.k8s.io/updated_at=2024_02_29T01_12_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
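The label call stamps the node with the minikube version/commit metadata used elsewhere in this report; once the API server is answering, the result can be checked with:

    kubectl get node addons-600097 --show-labels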
	I0229 01:12:48.033931  324746 ops.go:34] apiserver oom_adj: -16
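The -16 read back from /proc confirms the kubelet has biased the OOM killer away from the API server (the legacy oom_adj file ranges from -17, never kill, to +15, kill first). The same spot check from the log can be repeated by hand on the guest:

    cat /proc/$(pgrep kube-apiserver)/oom_adj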
	I0229 01:12:48.146850  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:48.647492  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:49.147438  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:49.647522  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:50.147456  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:50.646841  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:51.146903  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:51.646999  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:52.147147  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:52.646959  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:53.146997  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:53.647526  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:54.147011  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:54.646984  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:55.147031  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:55.647467  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:56.147548  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:56.647582  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:57.147136  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:57.647235  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:58.147805  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:58.647063  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:59.147396  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:59.647124  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:13:00.147511  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:13:00.647154  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:13:01.147216  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:13:01.377718  324746 kubeadm.go:1088] duration metric: took 13.455183803s to wait for elevateKubeSystemPrivileges.
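The run of identical `kubectl get sa default` calls above is a fixed-interval poll: minikube retries roughly every 500ms until the `default` ServiceAccount exists, which is what the 13.455s duration metric measures. A shell sketch of the same wait loop, reusing the binary and kubeconfig paths from the log:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done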
	I0229 01:13:01.377767  324746 kubeadm.go:406] StartCluster complete in 24.275141265s
	I0229 01:13:01.377805  324746 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:13:01.377961  324746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:13:01.378707  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:13:01.379000  324746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 01:13:01.379168  324746 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
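Every key marked true in the toEnable map above is installed for this profile. The same set can be inspected or toggled per profile from the minikube CLI, e.g.:

    minikube addons list -p addons-600097
    minikube addons enable ingress -p addons-600097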
	I0229 01:13:01.379260  324746 addons.go:69] Setting yakd=true in profile "addons-600097"
	I0229 01:13:01.379264  324746 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-600097"
	I0229 01:13:01.379282  324746 addons.go:234] Setting addon yakd=true in "addons-600097"
	I0229 01:13:01.379340  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.379343  324746 config.go:182] Loaded profile config "addons-600097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:13:01.379361  324746 addons.go:69] Setting cloud-spanner=true in profile "addons-600097"
	I0229 01:13:01.379377  324746 addons.go:234] Setting addon cloud-spanner=true in "addons-600097"
	I0229 01:13:01.379354  324746 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-600097"
	I0229 01:13:01.379429  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.379452  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.379775  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.379789  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.379801  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.379808  324746 addons.go:69] Setting inspektor-gadget=true in profile "addons-600097"
	I0229 01:13:01.379824  324746 addons.go:234] Setting addon inspektor-gadget=true in "addons-600097"
	I0229 01:13:01.379825  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.379853  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.379867  324746 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-600097"
	I0229 01:13:01.379883  324746 addons.go:69] Setting default-storageclass=true in profile "addons-600097"
	I0229 01:13:01.379896  324746 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-600097"
	I0229 01:13:01.379902  324746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-600097"
	I0229 01:13:01.379873  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.380186  324746 addons.go:69] Setting helm-tiller=true in profile "addons-600097"
	I0229 01:13:01.380209  324746 addons.go:234] Setting addon helm-tiller=true in "addons-600097"
	I0229 01:13:01.380255  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.380259  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.380282  324746 addons.go:69] Setting registry=true in profile "addons-600097"
	I0229 01:13:01.380293  324746 addons.go:234] Setting addon registry=true in "addons-600097"
	I0229 01:13:01.380332  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.380342  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.380377  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.380434  324746 addons.go:69] Setting ingress=true in profile "addons-600097"
	I0229 01:13:01.380448  324746 addons.go:234] Setting addon ingress=true in "addons-600097"
	I0229 01:13:01.380616  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.380639  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.380674  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.380696  324746 addons.go:69] Setting gcp-auth=true in profile "addons-600097"
	I0229 01:13:01.380704  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.380712  324746 mustload.go:65] Loading cluster: addons-600097
	I0229 01:13:01.380754  324746 addons.go:69] Setting storage-provisioner=true in profile "addons-600097"
	I0229 01:13:01.380767  324746 addons.go:234] Setting addon storage-provisioner=true in "addons-600097"
	I0229 01:13:01.380801  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.380890  324746 config.go:182] Loaded profile config "addons-600097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:13:01.381092  324746 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-600097"
	I0229 01:13:01.381123  324746 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-600097"
	I0229 01:13:01.381148  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.381176  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.381203  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.381221  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.381221  324746 addons.go:69] Setting ingress-dns=true in profile "addons-600097"
	I0229 01:13:01.381235  324746 addons.go:234] Setting addon ingress-dns=true in "addons-600097"
	I0229 01:13:01.381252  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.379831  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.381302  324746 addons.go:69] Setting volumesnapshots=true in profile "addons-600097"
	I0229 01:13:01.381313  324746 addons.go:234] Setting addon volumesnapshots=true in "addons-600097"
	I0229 01:13:01.381531  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.381562  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.379856  324746 addons.go:69] Setting metrics-server=true in profile "addons-600097"
	I0229 01:13:01.381628  324746 addons.go:234] Setting addon metrics-server=true in "addons-600097"
	I0229 01:13:01.381673  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.381790  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.381820  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.382235  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.382582  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.382609  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.382923  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.383285  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.383304  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.401142  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0229 01:13:01.401151  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0229 01:13:01.401805  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.401919  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.402516  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.402543  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.402686  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.402708  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.403034  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.403560  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.403604  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.403828  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.403857  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0229 01:13:01.404656  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.404707  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.405142  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.405761  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.405790  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.405866  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0229 01:13:01.406142  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.406700  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.406737  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.406774  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.406799  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.406840  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.406875  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.407757  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.407789  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
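Each `Launching plugin server` / `Plugin server listening at address 127.0.0.1:<port>` pair in this stretch is one kvm2 driver child process serving libmachine RPC on a loopback TCP port. While a run is in flight, the listeners can be observed from the host with a diagnostic one-liner (not part of the test itself):

    ss -ltnp | grep docker-machine-driver-kvm2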
	I0229 01:13:01.417323  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37033
	I0229 01:13:01.418458  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36963
	I0229 01:13:01.418683  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I0229 01:13:01.418687  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0229 01:13:01.419273  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.419445  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.419992  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.420012  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.420161  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.420171  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.420238  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.420558  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.420640  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.420882  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.420945  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.420991  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.422129  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.422156  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.422468  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.422487  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.422607  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.422616  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.422982  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.423039  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.423590  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.423624  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.423795  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.425526  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.425547  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.425609  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.426330  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.426365  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.430454  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.431063  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.431089  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.431528  324746 addons.go:234] Setting addon default-storageclass=true in "addons-600097"
	I0229 01:13:01.431581  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.431996  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.432048  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.440558  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I0229 01:13:01.440716  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0229 01:13:01.441238  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.441844  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.441865  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.442319  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.442765  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.442940  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.442993  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.446682  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.446703  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.446918  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0229 01:13:01.447471  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.447559  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.448061  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.448078  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.448198  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0229 01:13:01.448380  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.448663  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.448826  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.449331  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.449350  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.449713  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.450309  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.450350  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.450559  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.453836  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I0229 01:13:01.453975  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
	I0229 01:13:01.454473  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.454974  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.454992  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.455478  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.456141  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.456181  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.456720  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
	I0229 01:13:01.456906  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.457095  324746 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-600097"
	I0229 01:13:01.457144  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.457495  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.457512  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.457560  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.457601  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.457931  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.458558  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.458584  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.458596  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.460489  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0229 01:13:01.459078  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.461230  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0229 01:13:01.463306  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0229 01:13:01.462157  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.462565  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.464440  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.465967  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0229 01:13:01.464885  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34939
	I0229 01:13:01.465276  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.465683  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.468251  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0229 01:13:01.467239  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.467473  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.467513  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.470504  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0229 01:13:01.469977  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.470084  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.470596  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.470756  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.471549  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.471801  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0229 01:13:01.472490  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45575
	I0229 01:13:01.472520  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.474312  324746 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0229 01:13:01.473128  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0229 01:13:01.473375  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.473911  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.474199  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.475662  324746 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0229 01:13:01.475678  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0229 01:13:01.475698  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
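`scp memory` means the manifest is streamed from an in-memory buffer rather than copied from a file on the host. An equivalent by hand, assuming a local copy of the manifest and the SSH key path shown later in this log, would be:

    cat deployment.yaml \
      | ssh -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa \
          docker@192.168.39.181 "sudo tee /etc/kubernetes/addons/deployment.yaml >/dev/null"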
	I0229 01:13:01.477375  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0229 01:13:01.476014  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.479589  324746 out.go:177]   - Using image docker.io/registry:2.8.3
	I0229 01:13:01.478456  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0229 01:13:01.478479  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.478660  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.479295  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.481881  324746 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0229 01:13:01.480754  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0229 01:13:01.480800  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.480900  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.481114  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.483218  324746 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0229 01:13:01.483243  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0229 01:13:01.483262  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.483296  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.483314  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.483338  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.483634  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0229 01:13:01.484054  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.484243  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	I0229 01:13:01.484263  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
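Each `new ssh client` line is an independent session into the guest, and its logged fields (IP, Port, SSHKeyPath, Username) map directly onto a plain ssh invocation:

    ssh -p 22 -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa docker@192.168.39.181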
	I0229 01:13:01.484389  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.486037  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44253
	I0229 01:13:01.486167  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
	I0229 01:13:01.486663  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.486985  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.487168  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.487181  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.487288  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.487295  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.487305  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.488710  324746 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0229 01:13:01.487643  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.487701  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.487903  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.487930  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.488145  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.488380  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.488521  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.489643  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.489885  324746 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0229 01:13:01.489896  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.489904  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0229 01:13:01.489919  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.489921  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.489931  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.490003  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.490024  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.490191  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.490202  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.490258  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.490364  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.490417  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.490597  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.490669  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.490703  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.490922  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.490967  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.491312  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.492392  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.492416  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.492920  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.494781  324746 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0229 01:13:01.496022  324746 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0229 01:13:01.493951  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.494283  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.494995  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.496130  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.496155  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.495720  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0229 01:13:01.495761  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.496046  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0229 01:13:01.496308  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.496330  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.496648  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.498359  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0229 01:13:01.499487  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0229 01:13:01.499504  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0229 01:13:01.498641  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.499522  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.499552  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.497905  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.498018  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0229 01:13:01.497495  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.499814  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.499052  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.499410  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0229 01:13:01.499880  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.499913  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.500212  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.500231  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.500665  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.500690  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.500978  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.500996  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.501128  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.501143  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.501204  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.501467  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.501482  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.501602  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.501622  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.501882  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.501937  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.502077  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.502127  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.502161  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.504086  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.505888  324746 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0229 01:13:01.504484  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.505435  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.506028  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.507225  324746 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 01:13:01.507238  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 01:13:01.507256  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.507342  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.507365  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.507561  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.509002  324746 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0229 01:13:01.507847  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.510393  324746 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 01:13:01.510405  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0229 01:13:01.510422  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.511112  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I0229 01:13:01.511261  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I0229 01:13:01.511360  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.511637  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.511896  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.512428  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.512629  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.512643  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.512832  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.513782  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.513573  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.513809  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.514280  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.514376  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.514497  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.515056  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.515095  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.515552  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0229 01:13:01.515675  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.515771  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.515960  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.516258  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.516336  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.516352  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.516498  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.516523  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0229 01:13:01.516558  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.516823  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.517102  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.517194  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.517354  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.517501  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.517675  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.517689  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.517796  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.517812  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.518148  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.518165  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.518380  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.518435  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.520031  324746 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0229 01:13:01.518814  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.519693  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.521346  324746 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0229 01:13:01.521366  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0229 01:13:01.521387  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.523255  324746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:13:01.522116  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.524568  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.525783  324746 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0229 01:13:01.524742  324746 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:13:01.524918  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.525118  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.526971  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 01:13:01.527004  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.527004  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.527093  324746 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 01:13:01.527109  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0229 01:13:01.527120  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.527310  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.527485  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.528018  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.530615  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.530782  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.531092  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.531113  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.531216  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.531239  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.531440  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.531503  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.531665  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.531714  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.531832  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.531846  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42839
	I0229 01:13:01.531870  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.531980  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.532041  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.532776  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.533314  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.533338  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.533803  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.534000  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.535657  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.535915  324746 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 01:13:01.535935  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 01:13:01.535953  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.536551  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0229 01:13:01.537066  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.537167  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0229 01:13:01.537846  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.537869  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.537888  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.538296  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.538503  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.538511  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.538526  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.538981  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.539189  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.540347  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.542144  324746 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0229 01:13:01.540821  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.541002  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.541424  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.544604  324746 out.go:177]   - Using image docker.io/busybox:stable
	I0229 01:13:01.543445  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.543492  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.547256  324746 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 01:13:01.545823  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.545943  324746 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 01:13:01.546124  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.548450  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0229 01:13:01.548466  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.549840  324746 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 01:13:01.548587  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.551008  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.552501  324746 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.6
	I0229 01:13:01.551392  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.552531  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.551553  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.553927  324746 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 01:13:01.553947  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0229 01:13:01.553965  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.554018  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.554157  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.554313  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.556851  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.557253  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.557303  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.557438  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.557627  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.557821  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.557975  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
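
The block above is minikube fanning out addon installs: each `sshutil.go:53] new ssh client` line opens a fresh SSH connection to the node (always &{IP:192.168.39.181 Port:22 ... Username:docker}), the `scp memory --> path (N bytes)` lines stream an in-memory manifest to that remote path over the connection, and the `ssh_runner.go:195] Run:` lines that follow execute kubectl remotely. A minimal sketch of the connect-and-run half with golang.org/x/crypto/ssh; illustrative only, since minikube's own sshutil and ssh_runner layer retries and scp framing on top of this:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // dialNode opens an SSH session using the IP/SSHKeyPath/Username fields
    // logged above and runs a single command on the node.
    func dialNode(ip, keyPath, user, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
    	}
    	client, err := ssh.Dial("tcp", fmt.Sprintf("%s:22", ip), cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	// Values lifted from the log; the command matches the kubelet check below.
    	out, err := dialNode("192.168.39.181",
    		"/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa",
    		"docker", "sudo systemctl is-active --quiet service kubelet")
    	fmt.Println(out, err)
    }
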
	I0229 01:13:01.859438  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 01:13:01.888146  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0229 01:13:01.889739  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 01:13:01.891123  324746 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-600097" context rescaled to 1 replicas
	I0229 01:13:01.891157  324746 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.181 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 01:13:01.892969  324746 out.go:177] * Verifying Kubernetes components...
	I0229 01:13:01.894268  324746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:13:01.936643  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 01:13:02.067493  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0229 01:13:02.067520  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0229 01:13:02.077550  324746 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0229 01:13:02.077572  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0229 01:13:02.120391  324746 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0229 01:13:02.120418  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0229 01:13:02.133074  324746 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0229 01:13:02.133096  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0229 01:13:02.177553  324746 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0229 01:13:02.177578  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0229 01:13:02.208140  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 01:13:02.209571  324746 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0229 01:13:02.209592  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0229 01:13:02.252152  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:13:02.256537  324746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 01:13:02.256561  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0229 01:13:02.292311  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 01:13:02.383965  324746 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.00492685s)
	I0229 01:13:02.384121  324746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
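
The bash pipeline above rewrites CoreDNS in place: it reads the coredns ConfigMap, uses sed to insert a `log` directive before the `errors` line and a `hosts` stanza before the `forward . /etc/resolv.conf` line, then pipes the result into `kubectl replace -f -`. After the edit the Corefile carries a block like the excerpt below (other plugins elided), which is what lets pods resolve host.minikube.internal to the host-side gateway; the "host record injected into CoreDNS's ConfigMap" line further down confirms it took effect:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
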
	I0229 01:13:02.390529  324746 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0229 01:13:02.390550  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0229 01:13:02.412685  324746 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0229 01:13:02.412708  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0229 01:13:02.416709  324746 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0229 01:13:02.416725  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0229 01:13:02.425279  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0229 01:13:02.425308  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0229 01:13:02.437826  324746 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0229 01:13:02.437852  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0229 01:13:02.559211  324746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 01:13:02.559248  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 01:13:02.562856  324746 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0229 01:13:02.562874  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0229 01:13:02.720091  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0229 01:13:02.741145  324746 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0229 01:13:02.741177  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0229 01:13:02.747605  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0229 01:13:02.757979  324746 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0229 01:13:02.758001  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0229 01:13:02.789632  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0229 01:13:02.789658  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0229 01:13:02.881663  324746 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0229 01:13:02.881691  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0229 01:13:02.977472  324746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 01:13:02.977506  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 01:13:03.011896  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0229 01:13:03.011924  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0229 01:13:03.147619  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0229 01:13:03.147651  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0229 01:13:03.160802  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0229 01:13:03.160830  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0229 01:13:03.191987  324746 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0229 01:13:03.192021  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0229 01:13:03.273364  324746 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0229 01:13:03.273387  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0229 01:13:03.285187  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 01:13:03.389404  324746 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 01:13:03.389430  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0229 01:13:03.413139  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0229 01:13:03.413162  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0229 01:13:03.442535  324746 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0229 01:13:03.442564  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0229 01:13:03.517522  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0229 01:13:03.615031  324746 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0229 01:13:03.615061  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0229 01:13:03.622107  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0229 01:13:03.622130  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0229 01:13:03.628091  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 01:13:03.747989  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0229 01:13:03.748030  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0229 01:13:03.773706  324746 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 01:13:03.773729  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0229 01:13:03.885650  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0229 01:13:03.885674  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0229 01:13:03.926784  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 01:13:03.994900  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 01:13:03.994926  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0229 01:13:04.135899  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 01:13:06.029590  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.1701135s)
	I0229 01:13:06.029663  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:06.029676  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:06.030051  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:06.030074  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:06.030088  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:06.030096  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:06.030101  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:06.030389  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:06.030445  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:06.030413  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:06.037671  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:06.037691  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:06.038038  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:06.038075  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:06.038092  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:07.166730  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.278516924s)
	I0229 01:13:07.166807  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:07.166826  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:07.167263  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:07.167284  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:07.167294  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:07.167303  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:07.167304  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:07.167543  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:07.167558  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:08.084574  324746 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0229 01:13:08.084618  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:08.088035  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:08.088454  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:08.088482  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:08.088650  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:08.088862  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:08.089043  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:08.089203  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:08.788248  324746 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0229 01:13:08.935184  324746 addons.go:234] Setting addon gcp-auth=true in "addons-600097"
	I0229 01:13:08.935257  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:08.935613  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:08.935651  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:08.951587  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I0229 01:13:08.952061  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:08.952621  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:08.952648  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:08.953011  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:08.953647  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:08.953680  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:08.985322  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0229 01:13:08.985829  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:08.986378  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:08.986412  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:08.986779  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:08.987038  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:08.988482  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:08.988738  324746 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0229 01:13:08.988762  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:08.991640  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:08.992152  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:08.992182  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:08.992329  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:08.992527  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:08.992694  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:08.992843  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
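
The `Launching plugin server for driver kvm2` / `Plugin server listening at address 127.0.0.1:NNNNN` pairs, here and earlier, are libmachine's out-of-process driver pattern: the docker-machine-driver-kvm2 binary found at the logged path is spawned, serves the driver API over RPC on an ephemeral loopback port, and advertises that address so the parent process can dial it; the subsequent `Calling .GetVersion`, `.SetConfigRaw`, `.GetMachineName`, `.GetState` and `.Close` lines are calls over that connection. A sketch of the shape of such a server with net/rpc; the KVMDriver stub is hypothetical and this is not libmachine's actual wire protocol:

    package main

    import (
    	"fmt"
    	"net"
    	"net/rpc"
    )

    // KVMDriver stands in for the driver implementation; the real plugin
    // protocol carries many more methods than this.
    type KVMDriver struct{}

    // GetVersion mirrors the "Using API Version 1" handshake seen in the log.
    func (d *KVMDriver) GetVersion(_ struct{}, v *int) error { *v = 1; return nil }

    func main() {
    	ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port, loopback only
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Plugin server listening at address", ln.Addr())
    	rpc.Register(&KVMDriver{})
    	rpc.Accept(ln) // serve driver calls until the parent closes us
    }
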
	I0229 01:13:09.809036  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.919247429s)
	I0229 01:13:09.809109  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:09.809119  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:09.809049  324746 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (7.914750701s)
	I0229 01:13:09.809421  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:09.809445  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:09.809456  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:09.809464  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:09.809710  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:09.809727  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:09.810334  324746 node_ready.go:35] waiting up to 6m0s for node "addons-600097" to be "Ready" ...
	I0229 01:13:09.898769  324746 node_ready.go:49] node "addons-600097" has status "Ready":"True"
	I0229 01:13:09.898805  324746 node_ready.go:38] duration metric: took 88.44004ms waiting for node "addons-600097" to be "Ready" ...
	I0229 01:13:09.898820  324746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
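
The node_ready and pod_ready waits above are condition polls: fetch the object, scan Status.Conditions for Ready, repeat until true or the 6m0s budget runs out. A compressed sketch of the per-pod check with client-go; clientset construction is elided, and this mirrors rather than reproduces minikube's pod_ready.go:

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod has condition Ready=True,
    // the same test behind the `has status "Ready":"True"` lines here.
    func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
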
	I0229 01:13:09.945986  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:09.946016  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:09.946454  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:09.946483  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:09.946513  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:09.964826  324746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4pcrt" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:09.989066  324746 pod_ready.go:92] pod "coredns-5dd5756b68-4pcrt" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:09.989104  324746 pod_ready.go:81] duration metric: took 24.247475ms waiting for pod "coredns-5dd5756b68-4pcrt" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:09.989119  324746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9fvrj" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.009199  324746 pod_ready.go:97] pod "coredns-5dd5756b68-9fvrj" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:00 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:00 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.181 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-29 01:13:00 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-29 01:13:02 +0000 UTC,FinishedAt:2024-02-29 01:13:09 +0000 UTC,ContainerID:cri-o://3066173b826c9ea3e073b61e5596e170c8ae5e512b01d82cc952e365248f32ea,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://3066173b826c9ea3e073b61e5596e170c8ae5e512b01d82cc952e365248f32ea Started:0xc003303280 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0229 01:13:10.009236  324746 pod_ready.go:81] duration metric: took 20.108612ms waiting for pod "coredns-5dd5756b68-9fvrj" in "kube-system" namespace to be "Ready" ...
	E0229 01:13:10.009255  324746 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-9fvrj" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:00 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:00 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.181 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-29 01:13:00 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-29 01:13:02 +0000 UTC,FinishedAt:2024-02-29 01:13:09 +0000 UTC,ContainerID:cri-o://3066173b826c9ea3e073b61e5596e170c8ae5e512b01d82cc952e365248f32ea,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://3066173b826c9ea3e073b61e5596e170c8ae5e512b01d82cc952e365248f32ea Started:0xc003303280 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0229 01:13:10.009264  324746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.039930  324746 pod_ready.go:92] pod "etcd-addons-600097" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:10.039960  324746 pod_ready.go:81] duration metric: took 30.686865ms waiting for pod "etcd-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.039974  324746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.054081  324746 pod_ready.go:92] pod "kube-apiserver-addons-600097" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:10.054106  324746 pod_ready.go:81] duration metric: took 14.124935ms waiting for pod "kube-apiserver-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.054117  324746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.224823  324746 pod_ready.go:92] pod "kube-controller-manager-addons-600097" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:10.224850  324746 pod_ready.go:81] duration metric: took 170.727451ms waiting for pod "kube-controller-manager-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.224863  324746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9h94v" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.615408  324746 pod_ready.go:92] pod "kube-proxy-9h94v" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:10.615436  324746 pod_ready.go:81] duration metric: took 390.566786ms waiting for pod "kube-proxy-9h94v" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.615446  324746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:11.013870  324746 pod_ready.go:92] pod "kube-scheduler-addons-600097" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:11.013896  324746 pod_ready.go:81] duration metric: took 398.443377ms waiting for pod "kube-scheduler-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:11.013913  324746 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:11.741742  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.533558773s)
	I0229 01:13:11.741820  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.741817  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.489627997s)
	I0229 01:13:11.741863  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.741864  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.805189163s)
	I0229 01:13:11.741833  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.741899  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.741917  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.741949  324746 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.357788514s)
	I0229 01:13:11.741866  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.449530103s)
	I0229 01:13:11.741992  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.741994  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.02187216s)
	I0229 01:13:11.742007  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742002  324746 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0229 01:13:11.741879  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742031  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742043  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742040  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.994400636s)
	I0229 01:13:11.742076  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742087  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742132  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.456915888s)
	I0229 01:13:11.742153  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742163  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742209  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.224654106s)
	I0229 01:13:11.742239  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742250  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742316  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.114193215s)
	W0229 01:13:11.742345  324746 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 01:13:11.742391  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.815559786s)
	I0229 01:13:11.742403  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.742420  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.742423  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.742433  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.742434  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.742438  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742444  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.742448  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742452  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742461  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742382  324746 retry.go:31] will retry after 361.34681ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
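
The apply failure above is an ordering race, not a broken manifest: a single kubectl apply both creates the VolumeSnapshot* CRDs and instantiates a VolumeSnapshotClass from csi-hostpath-snapshotclass.yaml, and the API server has not finished establishing the new kinds by the time the class object arrives, hence "ensure CRDs are installed first". The log's own remedy is the 361ms retry (the re-apply a few lines below also adds kubectl apply --force). An alternative sketch that waits for the CRD to be established before re-applying; the wait condition is standard kubectl, not minikube code, and the strategy is an assumption:

    import "os/exec"

    // applyWhenEstablished blocks until the VolumeSnapshotClass CRD is
    // established, then re-applies the snapshot-class manifest. Sketch only;
    // the manifest path matches the log above.
    func applyWhenEstablished() error {
    	wait := exec.Command("kubectl", "wait",
    		"--for=condition=established", "--timeout=60s",
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
    	if err := wait.Run(); err != nil {
    		return err
    	}
    	return exec.Command("kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml").Run()
    }
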
	I0229 01:13:11.742451  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.742490  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.742498  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.742501  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.742508  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742425  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.742515  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.742517  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742522  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.742471  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.742534  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742541  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742509  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742570  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742453  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742604  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.742613  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.746349  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.746366  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.746375  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.746386  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.746435  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.746457  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.746482  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.746489  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.746496  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.746552  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.746577  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.746583  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.746592  324746 addons.go:470] Verifying addon ingress=true in "addons-600097"
	I0229 01:13:11.748340  324746 out.go:177] * Verifying ingress addon...
	I0229 01:13:11.746897  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.746925  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.746986  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747005  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747026  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747038  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747041  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747067  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747074  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747078  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747098  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747102  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747116  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747119  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747667  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747699  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.749838  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749850  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749868  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749874  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749878  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749900  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.749915  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.749942  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749958  324746 addons.go:470] Verifying addon metrics-server=true in "addons-600097"
	I0229 01:13:11.749975  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749943  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.750012  324746 addons.go:470] Verifying addon registry=true in "addons-600097"
	I0229 01:13:11.749947  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.750029  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.751460  324746 out.go:177] * Verifying registry addon...
	I0229 01:13:11.750175  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.750240  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.750251  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.750288  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.750789  324746 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0229 01:13:11.752635  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.752682  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.754000  324746 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-600097 service yakd-dashboard -n yakd-dashboard
	
	I0229 01:13:11.753290  324746 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0229 01:13:11.764863  324746 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0229 01:13:11.764893  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:11.773063  324746 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0229 01:13:11.773080  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
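Editor's note: the kapi.go:75/86/96 lines above and below record minikube's label-selector wait loop — it lists the pods matching a selector in a namespace, logs one "waiting for pod" line per poll, and repeats until every matching pod reports Ready. The following is a minimal sketch of that pattern using client-go; the function name waitForPodsReady, the 500 ms poll interval, and the kubeconfig path are illustrative assumptions, not minikube's actual kapi implementation.

	// Illustrative sketch (not minikube's kapi.go): poll pods matching a label
	// selector until all are Ready, the pattern the kapi.go:96 lines record.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsReady re-lists pods in ns matching selector until every
	// matching pod has the PodReady condition, or the timeout elapses.
	func waitForPodsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					ready = false
					break
				}
			}
			if ready {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // one "waiting for pod" log line per poll
		}
		return fmt.Errorf("timed out waiting for pods %q in %q", selector, ns)
	}

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path taken from the log
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodsReady(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("all matching pods Ready")
	}

The registry, ingress-nginx, csi-hostpath-driver, and gcp-auth waits interleaved through the rest of this log are all instances of this one loop, each with its own selector and namespace.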
	I0229 01:13:12.104163  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 01:13:12.234846  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.098868184s)
	I0229 01:13:12.234889  324746 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.246125084s)
	I0229 01:13:12.234920  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:12.234935  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:12.236620  324746 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 01:13:12.235257  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:12.235297  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:12.237883  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:12.237905  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:12.237918  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:12.239164  324746 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0229 01:13:12.240550  324746 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0229 01:13:12.240573  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0229 01:13:12.238215  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:12.238252  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:12.240604  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:12.240625  324746 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-600097"
	I0229 01:13:12.242387  324746 out.go:177] * Verifying csi-hostpath-driver addon...
	I0229 01:13:12.244298  324746 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0229 01:13:12.253563  324746 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0229 01:13:12.253582  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:12.264682  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:12.266782  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:12.323117  324746 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0229 01:13:12.323146  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0229 01:13:12.410200  324746 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 01:13:12.410238  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5447 bytes)
	I0229 01:13:12.490836  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 01:13:12.755271  324746 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0229 01:13:12.755297  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:12.770849  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:12.771299  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:13.023041  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:13.327226  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:13.345669  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:13.345920  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:13.750287  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:13.759234  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:13.762140  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:14.253924  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:14.256222  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:14.261222  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:14.631430  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.527210744s)
	I0229 01:13:14.631508  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:14.631521  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:14.631854  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:14.631873  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:14.631883  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:14.631896  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:14.632158  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:14.632202  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:14.632243  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:14.802365  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:14.802752  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:14.802924  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:15.096186  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.605295689s)
	I0229 01:13:15.096247  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:15.096267  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:15.096609  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:15.096634  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:15.096643  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:15.096644  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:15.096651  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:15.096905  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:15.096920  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:15.097951  324746 addons.go:470] Verifying addon gcp-auth=true in "addons-600097"
	I0229 01:13:15.099717  324746 out.go:177] * Verifying gcp-auth addon...
	I0229 01:13:15.102076  324746 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0229 01:13:15.103622  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:15.122633  324746 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0229 01:13:15.122652  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:15.250736  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:15.256707  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:15.266927  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:15.607302  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:15.753087  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:15.759765  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:15.764841  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:16.107385  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:16.251349  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:16.258039  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:16.261543  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:16.606454  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:16.751405  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:16.757056  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:16.759640  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:17.105808  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:17.250491  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:17.270295  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:17.271381  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:17.520743  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:17.606014  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:18.011553  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:18.014977  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:18.015115  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:18.107411  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:18.250942  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:18.257352  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:18.259641  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:18.606911  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:18.750144  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:18.757007  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:18.759949  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:19.106743  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:19.250210  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:19.257484  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:19.260610  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:19.521201  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:19.606923  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:19.751598  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:20.243615  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:20.245010  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:20.248396  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:20.254354  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:20.257531  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:20.259581  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:20.606277  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:20.750095  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:20.756931  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:20.760097  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:21.106288  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:21.250390  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:21.257964  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:21.260202  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:21.612545  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:21.750045  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:21.757593  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:21.765374  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:22.020725  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:22.106795  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:22.250080  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:22.257648  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:22.263861  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:22.606937  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:22.751379  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:22.757297  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:22.761035  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:23.106326  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:23.251520  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:23.256607  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:23.263991  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:23.611385  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:23.752583  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:23.758440  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:23.760133  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:24.106500  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:24.268566  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:24.269504  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:24.274238  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:24.520905  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:24.606053  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:24.751163  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:24.757530  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:24.760138  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:25.028556  324746 pod_ready.go:92] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:25.028591  324746 pod_ready.go:81] duration metric: took 14.014669893s waiting for pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:25.028606  324746 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qctgj" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:25.037194  324746 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-qctgj" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:25.037218  324746 pod_ready.go:81] duration metric: took 8.604188ms waiting for pod "nvidia-device-plugin-daemonset-qctgj" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:25.037235  324746 pod_ready.go:38] duration metric: took 15.138402406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 01:13:25.037251  324746 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:13:25.037302  324746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:13:25.056671  324746 api_server.go:72] duration metric: took 23.165477386s to wait for apiserver process to appear ...
	I0229 01:13:25.056705  324746 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:13:25.056734  324746 api_server.go:253] Checking apiserver healthz at https://192.168.39.181:8443/healthz ...
	I0229 01:13:25.065681  324746 api_server.go:279] https://192.168.39.181:8443/healthz returned 200:
	ok
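Editor's note: the api_server.go lines just above record the apiserver health probe — an HTTPS GET to /healthz that is considered healthy on HTTP 200 with body "ok". A minimal sketch of that check follows; minikube authenticates with the cluster CA and client certificates, whereas this sketch skips TLS verification purely to stay short (default RBAC lets unauthenticated callers read /healthz via system:public-info-viewer).

	// Sketch of the healthz probe recorded above: GET /healthz, expect 200 "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only; minikube verifies the cluster CA
		}}
		resp, err := client.Get("https://192.168.39.181:8443/healthz") // endpoint from the log
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}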
	I0229 01:13:25.067172  324746 api_server.go:141] control plane version: v1.28.4
	I0229 01:13:25.067204  324746 api_server.go:131] duration metric: took 10.490036ms to wait for apiserver health ...
	I0229 01:13:25.067215  324746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:13:25.079926  324746 system_pods.go:59] 18 kube-system pods found
	I0229 01:13:25.079953  324746 system_pods.go:61] "coredns-5dd5756b68-4pcrt" [3eb43d6f-14c6-42de-be44-4441b9f518ff] Running
	I0229 01:13:25.079960  324746 system_pods.go:61] "csi-hostpath-attacher-0" [d0230873-4868-4afc-9928-0dd97f8361e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0229 01:13:25.079976  324746 system_pods.go:61] "csi-hostpath-resizer-0" [96d4e7b6-6974-4d78-a074-175d8b634226] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0229 01:13:25.079989  324746 system_pods.go:61] "csi-hostpathplugin-qp8h8" [d8ff48fd-0803-4e5a-8d3d-71b3c9399207] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0229 01:13:25.080000  324746 system_pods.go:61] "etcd-addons-600097" [807e5dc6-85a5-40d2-8fc3-de8285d05e68] Running
	I0229 01:13:25.080010  324746 system_pods.go:61] "kube-apiserver-addons-600097" [b5798f77-a50f-4e7a-b51a-7529a8e8152b] Running
	I0229 01:13:25.080019  324746 system_pods.go:61] "kube-controller-manager-addons-600097" [683a75b8-f632-4aa2-9375-8c0a3f3a443f] Running
	I0229 01:13:25.080030  324746 system_pods.go:61] "kube-ingress-dns-minikube" [bd4a21c2-8e95-404a-a7db-ee307a4d8899] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0229 01:13:25.080038  324746 system_pods.go:61] "kube-proxy-9h94v" [86903f1f-0d36-4812-acde-9145f651a025] Running
	I0229 01:13:25.080044  324746 system_pods.go:61] "kube-scheduler-addons-600097" [6cb4c51f-d912-471b-8c97-54c360e21d0b] Running
	I0229 01:13:25.080047  324746 system_pods.go:61] "metrics-server-69cf46c98-hrq8h" [e7098420-28d2-4a6b-a93d-4fefa31359b3] Running
	I0229 01:13:25.080053  324746 system_pods.go:61] "nvidia-device-plugin-daemonset-qctgj" [a6d1f69b-373d-49c1-a1da-9b03d99cc13c] Running
	I0229 01:13:25.080060  324746 system_pods.go:61] "registry-proxy-rntnp" [48e9e81a-42f9-4d1d-9354-285750cd1bd8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0229 01:13:25.080067  324746 system_pods.go:61] "registry-q4qbx" [44db4128-7109-4402-9de5-49bec8724d9f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0229 01:13:25.080073  324746 system_pods.go:61] "snapshot-controller-58dbcc7b99-9b2bf" [87a91b45-bf66-4d8c-a507-e1308617e2e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 01:13:25.080082  324746 system_pods.go:61] "snapshot-controller-58dbcc7b99-rt5hl" [c3d0545b-72bb-4f39-a718-5aa937bc37cf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 01:13:25.080086  324746 system_pods.go:61] "storage-provisioner" [c0d595aa-7503-497b-8719-8a82ca333df3] Running
	I0229 01:13:25.080092  324746 system_pods.go:61] "tiller-deploy-7b677967b9-w6sfn" [d68c9fec-87de-4b51-b793-1fce3f10efe2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0229 01:13:25.080104  324746 system_pods.go:74] duration metric: took 12.881249ms to wait for pod list to return data ...
	I0229 01:13:25.080119  324746 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:13:25.083819  324746 default_sa.go:45] found service account: "default"
	I0229 01:13:25.083839  324746 default_sa.go:55] duration metric: took 3.70817ms for default service account to be created ...
	I0229 01:13:25.083849  324746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 01:13:25.097945  324746 system_pods.go:86] 18 kube-system pods found
	I0229 01:13:25.097972  324746 system_pods.go:89] "coredns-5dd5756b68-4pcrt" [3eb43d6f-14c6-42de-be44-4441b9f518ff] Running
	I0229 01:13:25.097980  324746 system_pods.go:89] "csi-hostpath-attacher-0" [d0230873-4868-4afc-9928-0dd97f8361e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0229 01:13:25.097986  324746 system_pods.go:89] "csi-hostpath-resizer-0" [96d4e7b6-6974-4d78-a074-175d8b634226] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0229 01:13:25.097995  324746 system_pods.go:89] "csi-hostpathplugin-qp8h8" [d8ff48fd-0803-4e5a-8d3d-71b3c9399207] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0229 01:13:25.098003  324746 system_pods.go:89] "etcd-addons-600097" [807e5dc6-85a5-40d2-8fc3-de8285d05e68] Running
	I0229 01:13:25.098008  324746 system_pods.go:89] "kube-apiserver-addons-600097" [b5798f77-a50f-4e7a-b51a-7529a8e8152b] Running
	I0229 01:13:25.098013  324746 system_pods.go:89] "kube-controller-manager-addons-600097" [683a75b8-f632-4aa2-9375-8c0a3f3a443f] Running
	I0229 01:13:25.098019  324746 system_pods.go:89] "kube-ingress-dns-minikube" [bd4a21c2-8e95-404a-a7db-ee307a4d8899] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0229 01:13:25.098029  324746 system_pods.go:89] "kube-proxy-9h94v" [86903f1f-0d36-4812-acde-9145f651a025] Running
	I0229 01:13:25.098036  324746 system_pods.go:89] "kube-scheduler-addons-600097" [6cb4c51f-d912-471b-8c97-54c360e21d0b] Running
	I0229 01:13:25.098040  324746 system_pods.go:89] "metrics-server-69cf46c98-hrq8h" [e7098420-28d2-4a6b-a93d-4fefa31359b3] Running
	I0229 01:13:25.098048  324746 system_pods.go:89] "nvidia-device-plugin-daemonset-qctgj" [a6d1f69b-373d-49c1-a1da-9b03d99cc13c] Running
	I0229 01:13:25.098053  324746 system_pods.go:89] "registry-proxy-rntnp" [48e9e81a-42f9-4d1d-9354-285750cd1bd8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0229 01:13:25.098064  324746 system_pods.go:89] "registry-q4qbx" [44db4128-7109-4402-9de5-49bec8724d9f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0229 01:13:25.098070  324746 system_pods.go:89] "snapshot-controller-58dbcc7b99-9b2bf" [87a91b45-bf66-4d8c-a507-e1308617e2e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 01:13:25.098076  324746 system_pods.go:89] "snapshot-controller-58dbcc7b99-rt5hl" [c3d0545b-72bb-4f39-a718-5aa937bc37cf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 01:13:25.098081  324746 system_pods.go:89] "storage-provisioner" [c0d595aa-7503-497b-8719-8a82ca333df3] Running
	I0229 01:13:25.098086  324746 system_pods.go:89] "tiller-deploy-7b677967b9-w6sfn" [d68c9fec-87de-4b51-b793-1fce3f10efe2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0229 01:13:25.098092  324746 system_pods.go:126] duration metric: took 14.237704ms to wait for k8s-apps to be running ...
	I0229 01:13:25.098100  324746 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:13:25.098144  324746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:13:25.115753  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:25.116555  324746 system_svc.go:56] duration metric: took 18.44499ms WaitForService to wait for kubelet.
	I0229 01:13:25.116587  324746 kubeadm.go:581] duration metric: took 23.225396491s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 01:13:25.116614  324746 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:13:25.121703  324746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:13:25.121763  324746 node_conditions.go:123] node cpu capacity is 2
	I0229 01:13:25.121783  324746 node_conditions.go:105] duration metric: took 5.162231ms to run NodePressure ...
	I0229 01:13:25.121801  324746 start.go:228] waiting for startup goroutines ...
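Editor's note: the node_conditions lines above ("ephemeral capacity is 17734596Ki", "cpu capacity is 2") come from the node's reported capacity, which the NodePressure verification reads. A short sketch of reading the same fields with client-go, under the same illustrative kubeconfig-path assumption as the earlier sketch; this is not minikube's node_conditions code.

	// List nodes and print the capacity fields the NodePressure check inspects.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage] // e.g. 17734596Ki
			cpu := n.Status.Capacity[corev1.ResourceCPU]              // e.g. 2
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		}
	}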
	I0229 01:13:25.251490  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:25.262009  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:25.263859  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:25.606940  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:25.752739  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:25.756403  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:25.760191  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:26.108293  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:26.250578  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:26.256896  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:26.259774  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:26.607159  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:26.752011  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:26.757296  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:26.759574  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:27.105730  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:27.251019  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:27.258871  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:27.262721  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:27.606364  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:27.752212  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:27.756792  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:27.760258  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:28.106949  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:28.250860  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:28.257572  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:28.260438  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:28.794907  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:28.814334  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:28.816357  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:28.818673  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:29.106424  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:29.250850  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:29.258780  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:29.261282  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:29.606481  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:29.750711  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:29.756768  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:29.759853  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:30.106618  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:30.250122  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:30.257397  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:30.268705  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:30.606898  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:30.750556  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:30.757454  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:30.760729  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:31.107663  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:31.251971  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:31.260164  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:31.263332  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:31.606540  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:31.750869  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:31.757265  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:31.761363  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:32.106145  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:32.250746  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:32.257694  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:32.260393  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:32.606493  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:32.750748  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:32.757131  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:32.761475  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:33.106159  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:33.251610  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:33.260627  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:33.268869  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:33.606251  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:33.750843  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:33.757353  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:33.759120  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:34.107109  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:34.250615  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:34.256846  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:34.259516  324746 kapi.go:107] duration metric: took 22.506222093s to wait for kubernetes.io/minikube-addons=registry ...
	I0229 01:13:34.606373  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:34.751095  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:34.757870  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:35.106049  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:35.250752  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:35.256833  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:35.606414  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:35.757108  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:35.757727  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:36.352549  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:36.354689  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:36.356728  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:36.606332  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:36.751145  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:36.757026  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:37.106864  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:37.250822  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:37.256561  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:37.606009  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:37.758030  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:37.758990  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:38.107469  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:38.251772  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:38.257020  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:38.607065  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:38.751948  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:38.757872  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:39.106704  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:39.250715  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:39.256652  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:39.607503  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:39.749671  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:39.756376  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:40.105677  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:40.250047  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:40.257224  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:40.606366  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:40.750500  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:40.757953  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:41.185397  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:41.254066  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:41.268901  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:41.606076  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:41.751459  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:41.756279  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:42.108235  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:42.250355  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:42.257722  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:42.606599  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:42.750966  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:42.757045  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:43.106952  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:43.250590  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:43.257083  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:43.606206  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:43.753730  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:43.757466  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:44.106889  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:44.251201  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:44.257182  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:44.606441  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:44.751482  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:44.758157  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:45.106320  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:45.251905  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:45.258412  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:45.607871  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:45.751394  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:45.758676  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:46.107120  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:46.250485  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:46.256789  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:46.606435  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:46.750577  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:46.756797  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:47.106544  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:47.251889  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:47.259261  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:47.607676  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:47.751031  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:47.757116  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:48.106293  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:48.261055  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:48.261293  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:48.606516  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:48.750689  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:48.757405  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:49.106113  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:49.250909  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:49.258488  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:49.606187  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:49.750661  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:49.756499  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:50.106253  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:50.251487  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:50.261586  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:50.606602  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:50.750935  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:50.756998  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:51.108742  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:51.281206  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:51.282237  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:51.606577  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:51.750669  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:51.756484  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:52.106049  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:52.251105  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:52.257438  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:52.606189  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:52.751231  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:52.757327  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:53.106598  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:53.250248  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:53.257636  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:53.607334  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:53.752012  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:53.757107  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:54.107407  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:54.251906  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:54.257999  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:54.605762  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:54.751944  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:54.763769  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:55.107508  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:55.253356  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:55.257649  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:55.606590  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:55.754861  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:55.757378  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:56.108621  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:56.250920  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:56.257029  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:56.607050  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:56.751421  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:56.756044  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:57.106876  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:57.252530  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:57.257628  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:57.609196  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:57.751504  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:57.757463  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:58.106738  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:58.250777  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:58.257263  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:58.607577  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:58.749924  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:58.758151  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:59.109965  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:59.250258  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:59.257294  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:59.606919  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:59.752332  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:59.757170  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:00.106853  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:00.250407  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:00.257205  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:00.612346  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:00.750702  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:00.758083  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:01.107140  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:01.253015  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:01.257551  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:01.606089  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:01.750973  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:01.756751  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:02.106170  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:02.250711  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:02.256536  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:02.606008  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:02.752380  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:02.767710  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:03.108805  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:03.250026  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:03.257029  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:03.606711  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:03.819870  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:03.822361  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:04.107237  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:04.264066  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:04.264156  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:04.608817  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:04.750565  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:04.756260  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:05.107846  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:05.250152  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:05.257339  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:05.605734  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:05.752353  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:05.759268  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:06.106857  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:06.250982  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:06.257253  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:06.607922  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:06.751054  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:06.758418  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:07.107442  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:07.251274  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:07.257596  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:07.606742  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:07.751222  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:07.757434  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:08.106064  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:08.251984  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:08.257270  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:08.606843  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:08.750884  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:08.757128  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:09.108466  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:09.251404  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:09.257100  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:09.607364  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:09.751173  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:09.757357  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:10.106707  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:10.250956  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:10.256672  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:10.622941  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:10.751421  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:10.757181  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:11.186632  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:11.252908  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:11.257387  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:11.607034  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:11.751937  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:11.759072  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:12.107073  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:12.250329  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:12.257261  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:12.606984  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:12.750653  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:12.756755  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:13.105709  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:13.255929  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:13.266793  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:13.605572  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:13.749514  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:13.756889  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:14.106113  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:14.251066  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:14.257725  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:14.606093  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:14.751744  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:14.756450  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:15.106774  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:15.250192  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:15.257208  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:15.607341  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:15.752577  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:15.760503  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:16.109800  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:16.259077  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:16.261376  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:16.607558  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:16.751578  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:16.760192  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:17.108331  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:17.250581  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:17.261604  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:17.606955  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:17.764231  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:17.764302  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:18.111111  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:18.257009  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:18.262645  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:18.606995  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:18.751445  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:18.757807  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:19.112674  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:19.251142  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:19.258090  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:19.606217  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:19.751344  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:19.765406  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:20.112110  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:20.257464  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:20.258025  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:20.606366  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:20.751986  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:20.756763  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:21.107410  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:21.251209  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:21.257861  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:21.606266  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:21.751025  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:21.756799  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:22.106005  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:22.258838  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:22.267315  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:22.606966  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:22.750791  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:22.757569  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:23.107022  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:23.250847  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:23.257194  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:23.606957  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:23.751925  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:23.758067  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:24.106478  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:24.250702  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:24.257204  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:24.607255  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:24.750891  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:24.756787  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:25.106308  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:25.250894  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:25.257343  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:25.607208  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:25.750137  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:25.757152  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:26.108922  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:26.250717  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:26.256895  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:26.606540  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:26.749612  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:26.757707  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:27.107456  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:27.250625  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:27.258151  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:27.606974  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:27.751171  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:27.756989  324746 kapi.go:107] duration metric: took 1m16.006198048s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0229 01:14:28.194555  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:28.257655  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:28.605727  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:28.751025  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:29.108186  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:29.251342  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:29.607528  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:29.750452  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:30.113305  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:30.251151  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:30.606475  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:30.750681  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:31.105628  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:31.252800  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:31.606206  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:31.751524  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:32.107547  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:32.253273  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:32.606835  324746 kapi.go:107] duration metric: took 1m17.504755183s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0229 01:14:32.608784  324746 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-600097 cluster.
	I0229 01:14:32.610166  324746 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0229 01:14:32.611443  324746 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0229 01:14:32.764643  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:33.250415  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:33.750003  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:34.252541  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:34.752475  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:35.250406  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:35.750080  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:36.319162  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:36.750442  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:37.250468  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:37.753022  324746 kapi.go:107] duration metric: took 1m25.508722462s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0229 01:14:37.754862  324746 out.go:177] * Enabled addons: default-storageclass, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, ingress-dns, helm-tiller, metrics-server, storage-provisioner, nvidia-device-plugin, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0229 01:14:37.756099  324746 addons.go:505] enable addons completed in 1m36.376936292s: enabled=[default-storageclass cloud-spanner storage-provisioner-rancher inspektor-gadget ingress-dns helm-tiller metrics-server storage-provisioner nvidia-device-plugin yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0229 01:14:37.756153  324746 start.go:233] waiting for cluster config update ...
	I0229 01:14:37.756179  324746 start.go:242] writing updated cluster config ...
	I0229 01:14:37.756501  324746 ssh_runner.go:195] Run: rm -f paused
	I0229 01:14:37.810781  324746 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 01:14:37.812619  324746 out.go:177] * Done! kubectl is now configured to use "addons-600097" cluster and "default" namespace by default
	
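
Note: per the out.go:177 messages at 01:14:32, the gcp-auth addon mounts GCP credentials into every new pod unless the pod carries a label with the `gcp-auth-skip-secret` key. Below is a minimal, hypothetical client-go sketch of creating such an opt-out pod; only the label key comes from the log above, while the label value "true", pod name, image, and namespace are illustrative assumptions.

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				// The label key is the one named in the log; per the message
				// above, gcp-auth skips credential injection for pods that
				// carry it. The value "true" is an assumption, not from the log.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

As the third message notes, this label only affects pods created after the addon is enabled; pods that already exist must be recreated (or the addon re-enabled with --refresh) for the change to apply.
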
	
	==> CRI-O <==
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.872648411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709169456872620345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565686,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90453eee-29e9-44f9-8d58-d119fc1aba1b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.873442561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=339fa82e-4db9-47c2-85c7-8a5bb0c11707 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.873540412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=339fa82e-4db9-47c2-85c7-8a5bb0c11707 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.873882516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1661a8c3489e2b745a2bbe51088c95c21380e24bcca18eb2d5755a37c7282737,PodSandboxId:aaa54ac99c030cb1f896f4650d711c7f8264ce9ab4f53695d23568a301c2e2b7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1709169449512699462,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-bfl47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab46e76-6627-4755-8a68-2b5088917ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 28770aac,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed0911b7b784142335a8b34129dc6fda60d717da39302f22481e51ed400f4d1a,PodSandboxId:2a3948bd91a4ab740310e28e7bd15c89c7611b699718c2665074b02d3a0b6788,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,State:CONTAINER_RUNNING,CreatedAt:1709169333253731662,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-lsqz4,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 86feeed9-5827-47a5-bcb1-f939810036ba,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2f87b897,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bac6187ed3b16ca3f8647be7b34bda8b4488cd9c3a05d0c7b486a144feb3629,PodSandboxId:5d394a514b98f19f9aae6ee22403004a54967a5ff2accb5e0f950441d6c2c043,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709169307569657306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: d714fecf-09b1-4cd0-b639-1b12d34e13b3,},Annotations:map[string]string{io.kubernetes.container.hash: 177a8c47,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e52b89c446a5aca696d9d9c8e0653b2a045ee5d7a02e149e8c7e01d4dd28c7,PodSandboxId:7eec2df001eb22efaa4e1f8b1669a9c7999ad46b0ca394028dca31a71fd34727,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709169272178893814,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-zccgt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0b9bb93b-6a21-45a5-b329-3f0735b3b8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4510e833,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5631fce45e2dfb0a87bbcfe51149a066915c9d85eff925a0d740594f9306be1d,PodSandboxId:aa3621ab1f6d9f4a17122630065f6ae5d9afdee4c711f5218a0541f331ae8551,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTA
INER_EXITED,CreatedAt:1709169255976962086,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rgrk5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: deed00d9-1a9e-46d5-a2ee-8bc5d56d7392,},Annotations:map[string]string{io.kubernetes.container.hash: ae3861f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eee5e70b7463dd497eb1543ae853da62232de16569a1547e45bed4c8d8e0acf,PodSandboxId:fe4b9f54145fd0f1dd97f4e1470f7e7b511b7341f8874f884930f6e28c416712,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562c
b08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169254385825660,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgdfs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4f370dc0-0da1-44ab-bfc6-54766f7b0faa,},Annotations:map[string]string{io.kubernetes.container.hash: 1872fdd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47278bfe07925227df31de5583b4df6361a65ae6493a34035cde22fe653823e,PodSandboxId:8d04fc535e877efdfdab69a7d7f6b76e57a1e72b17f1ef5c3e82cd9ef8d59168,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1709169231358794360,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-qmvcb,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e5aa7bf3-4864-4a99-89f8-7130c9effa51,},Annotations:map[string]string{io.kubernetes.container.hash: 5468fdce,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9704c1a775207dacdf60716c6037d07493e878532b1a09fa2d3fb47d621b818,PodSandboxId:96bf42d01fbe3a627dc33ba845027293cbf1bc383f9927cbef1035e2a5b9425d,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:19c696958fe8a63676ba26fa57114c33149168bbedfea2
46fc55e0e665c03098,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0745b508898e2aa68f29a3c7f21023d03feace165b2430bc2297d250e65009e0,State:CONTAINER_RUNNING,CreatedAt:1709169201576638973,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-qctgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d1f69b-373d-49c1-a1da-9b03d99cc13c,},Annotations:map[string]string{io.kubernetes.container.hash: d84ded9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2a2ea68ffb9c400b6cf9fe38eee3ecee40957dc05b127ebb08c7df6de024e1,PodSandboxId:4bbf1c6c47cb8b71a2d5de3778a1791f8437fa41ca78606c45f45265438eb384,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de
530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709169189662856745,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0d595aa-7503-497b-8719-8a82ca333df3,},Annotations:map[string]string{io.kubernetes.container.hash: 44d11f0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bbd76b157e12126054b522ab7cb2345412bc2a4948f8f4d5eb0d7eed7a47b,PodSandboxId:3e817c0064e886705c681abf7feac7da74cb4fd0eb58a55721456335e0b129be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601
a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709169182098500488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4pcrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb43d6f-14c6-42de-be44-4441b9f518ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3f5c45d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04b3d1fabb914b27d0d0b03918098e702230a9210cfb28a7fac5e69d894e252a,PodSandboxId:169a41
7eb22246e4aea67a98b788053f2ea370de772d4cf139dccba326b2f8a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709169181427274373,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9h94v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86903f1f-0d36-4812-acde-9145f651a025,},Annotations:map[string]string{io.kubernetes.container.hash: e580e67a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e26b9c70186d9761efbc13ad8f6cbbf8e52ab8ce4a433d365f57e3a6f7fefd,PodSandboxId:767a97fffcdcf9dd63fd5bf7280b40d6aa8f4bf
9a7e02349c2eac5e92ca840bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709169161724035962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4b8244708dd77863bdc2940d7ca944,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7c9686,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9425dce69f24a188c47d5add3884b5f287c7e60724dab40e088ac0e0b54c0708,PodSandboxId:d27b0fcd09df0c9a82aaa84162a76b4791efc3e87ac318a51bb39b2d9351b21b,Metadata:&ContainerMe
tadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709169161780781421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dcea44ad8c6fad4c7dcf5c120398c8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5009adb028085947daae1ba7fef2a0f2ae731e3c2e5efeed752043cdc2f9d0ea,PodSandboxId:2d274f82c931eaed86bdb6464b77452b457114fcb9b020f29fb441975e5bbbf8,Metadata:&ContainerMetadata{Name:kube-
apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709169161658848741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754d725e342de23a8503217d677b914c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8918fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4954df1875ea7603adf964a5a13be727dc7fe5a12501c20f689a1fb5d72ecb65,PodSandboxId:bce95c30109346af6bcab530d85e2d136e88ed4a90e8ca8c7cd250bf1c3cacc7,Metadata:&ContainerMetadata{Name:kube-controller-manage
r,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709169161599906726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f40b94ca78b79eee6c772a400b09a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=339fa82e-4db9-47c2-85c7-8a5bb0c11707 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.920653731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98c1472c-15c7-48db-bc38-1326c30abc9f name=/runtime.v1.RuntimeService/Version
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.920728361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98c1472c-15c7-48db-bc38-1326c30abc9f name=/runtime.v1.RuntimeService/Version
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.921869840Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45a12ac2-fe56-4fb5-8e4e-410ddb895e3f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.923267887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709169456923242724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:565686,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45a12ac2-fe56-4fb5-8e4e-410ddb895e3f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.923805594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02ff0b45-7580-463e-83ab-2aa379c77a5c name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.923861948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02ff0b45-7580-463e-83ab-2aa379c77a5c name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:17:36 addons-600097 crio[679]: time="2024-02-29 01:17:36.924409357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1661a8c3489e2b745a2bbe51088c95c21380e24bcca18eb2d5755a37c7282737,PodSandboxId:aaa54ac99c030cb1f896f4650d711c7f8264ce9ab4f53695d23568a301c2e2b7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1709169449512699462,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-bfl47,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dab46e76-6627-4755-8a68-2b5088917ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 28770aac,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed0911b7b784142335a8b34129dc6fda60d717da39302f22481e51ed400f4d1a,PodSandboxId:2a3948bd91a4ab740310e28e7bd15c89c7611b699718c2665074b02d3a0b6788,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3cb09943f099d7eadf10e50e2be686eaa43df402d5e9f3369164bb7f69d8fc79,State:CONTAINER_RUNNING,CreatedAt:1709169333253731662,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-lsqz4,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 86feeed9-5827-47a5-bcb1-f939810036ba,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2f87b897,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bac6187ed3b16ca3f8647be7b34bda8b4488cd9c3a05d0c7b486a144feb3629,PodSandboxId:5d394a514b98f19f9aae6ee22403004a54967a5ff2accb5e0f950441d6c2c043,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709169307569657306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: d714fecf-09b1-4cd0-b639-1b12d34e13b3,},Annotations:map[string]string{io.kubernetes.container.hash: 177a8c47,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e52b89c446a5aca696d9d9c8e0653b2a045ee5d7a02e149e8c7e01d4dd28c7,PodSandboxId:7eec2df001eb22efaa4e1f8b1669a9c7999ad46b0ca394028dca31a71fd34727,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709169272178893814,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5f6b4f85fd-zccgt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0b9bb93b-6a21-45a5-b329-3f0735b3b8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4510e833,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5631fce45e2dfb0a87bbcfe51149a066915c9d85eff925a0d740594f9306be1d,PodSandboxId:aa3621ab1f6d9f4a17122630065f6ae5d9afdee4c711f5218a0541f331ae8551,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTA
INER_EXITED,CreatedAt:1709169255976962086,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rgrk5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: deed00d9-1a9e-46d5-a2ee-8bc5d56d7392,},Annotations:map[string]string{io.kubernetes.container.hash: ae3861f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eee5e70b7463dd497eb1543ae853da62232de16569a1547e45bed4c8d8e0acf,PodSandboxId:fe4b9f54145fd0f1dd97f4e1470f7e7b511b7341f8874f884930f6e28c416712,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562c
b08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169254385825660,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgdfs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4f370dc0-0da1-44ab-bfc6-54766f7b0faa,},Annotations:map[string]string{io.kubernetes.container.hash: 1872fdd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47278bfe07925227df31de5583b4df6361a65ae6493a34035cde22fe653823e,PodSandboxId:8d04fc535e877efdfdab69a7d7f6b76e57a1e72b17f1ef5c3e82cd9ef8d59168,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1709169231358794360,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-qmvcb,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e5aa7bf3-4864-4a99-89f8-7130c9effa51,},Annotations:map[string]string{io.kubernetes.container.hash: 5468fdce,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9704c1a775207dacdf60716c6037d07493e878532b1a09fa2d3fb47d621b818,PodSandboxId:96bf42d01fbe3a627dc33ba845027293cbf1bc383f9927cbef1035e2a5b9425d,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:19c696958fe8a63676ba26fa57114c33149168bbedfea2
46fc55e0e665c03098,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0745b508898e2aa68f29a3c7f21023d03feace165b2430bc2297d250e65009e0,State:CONTAINER_RUNNING,CreatedAt:1709169201576638973,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-qctgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d1f69b-373d-49c1-a1da-9b03d99cc13c,},Annotations:map[string]string{io.kubernetes.container.hash: d84ded9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2a2ea68ffb9c400b6cf9fe38eee3ecee40957dc05b127ebb08c7df6de024e1,PodSandboxId:4bbf1c6c47cb8b71a2d5de3778a1791f8437fa41ca78606c45f45265438eb384,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de
530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709169189662856745,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0d595aa-7503-497b-8719-8a82ca333df3,},Annotations:map[string]string{io.kubernetes.container.hash: 44d11f0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bbd76b157e12126054b522ab7cb2345412bc2a4948f8f4d5eb0d7eed7a47b,PodSandboxId:3e817c0064e886705c681abf7feac7da74cb4fd0eb58a55721456335e0b129be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601
a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709169182098500488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4pcrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb43d6f-14c6-42de-be44-4441b9f518ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3f5c45d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04b3d1fabb914b27d0d0b03918098e702230a9210cfb28a7fac5e69d894e252a,PodSandboxId:169a41
7eb22246e4aea67a98b788053f2ea370de772d4cf139dccba326b2f8a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709169181427274373,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9h94v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86903f1f-0d36-4812-acde-9145f651a025,},Annotations:map[string]string{io.kubernetes.container.hash: e580e67a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e26b9c70186d9761efbc13ad8f6cbbf8e52ab8ce4a433d365f57e3a6f7fefd,PodSandboxId:767a97fffcdcf9dd63fd5bf7280b40d6aa8f4bf
9a7e02349c2eac5e92ca840bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709169161724035962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4b8244708dd77863bdc2940d7ca944,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7c9686,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9425dce69f24a188c47d5add3884b5f287c7e60724dab40e088ac0e0b54c0708,PodSandboxId:d27b0fcd09df0c9a82aaa84162a76b4791efc3e87ac318a51bb39b2d9351b21b,Metadata:&ContainerMe
tadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709169161780781421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dcea44ad8c6fad4c7dcf5c120398c8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5009adb028085947daae1ba7fef2a0f2ae731e3c2e5efeed752043cdc2f9d0ea,PodSandboxId:2d274f82c931eaed86bdb6464b77452b457114fcb9b020f29fb441975e5bbbf8,Metadata:&ContainerMetadata{Name:kube-
apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709169161658848741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754d725e342de23a8503217d677b914c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8918fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4954df1875ea7603adf964a5a13be727dc7fe5a12501c20f689a1fb5d72ecb65,PodSandboxId:bce95c30109346af6bcab530d85e2d136e88ed4a90e8ca8c7cd250bf1c3cacc7,Metadata:&ContainerMetadata{Name:kube-controller-manage
r,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709169161599906726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f40b94ca78b79eee6c772a400b09a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02ff0b45-7580-463e-83ab-2aa379c77a5c name=/runtime.v1.RuntimeService/ListContainers
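The CRI-O entries above are the runtime tracing each CRI call it serves (Version, ImageFsInfo, ListContainers), which only appears when CRI-O logs at debug level. As a sketch for confirming the active level on the node, assuming CRI-O's stock config path of /etc/crio/crio.conf (minikube may instead set it via a drop-in file or a command-line flag):

    out/minikube-linux-amd64 -p addons-600097 ssh "grep -R log_level /etc/crio/"

The ListContainersResponse payload itself is the same data rendered as a table in the container status section below.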
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	1661a8c3489e2       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app            0                   aaa54ac99c030       hello-world-app-5d77478584-bfl47
	ed0911b7b7841       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                   0                   2a3948bd91a4a       headlamp-7ddfbb94ff-lsqz4
	2bac6187ed3b1       docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                              2 minutes ago       Running             nginx                      0                   5d394a514b98f       nginx
	f6e52b89c446a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                 3 minutes ago       Running             gcp-auth                   0                   7eec2df001eb2       gcp-auth-5f6b4f85fd-zccgt
	5631fce45e2df       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e   3 minutes ago       Exited              patch                      0                   aa3621ab1f6d9       ingress-nginx-admission-patch-rgrk5
	2eee5e70b7463       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e   3 minutes ago       Exited              create                     0                   fe4b9f54145fd       ingress-nginx-admission-create-sgdfs
	d47278bfe0792       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                       0                   8d04fc535e877       yakd-dashboard-9947fc6bf-qmvcb
	b9704c1a77520       nvcr.io/nvidia/k8s-device-plugin@sha256:19c696958fe8a63676ba26fa57114c33149168bbedfea246fc55e0e665c03098                     4 minutes ago       Running             nvidia-device-plugin-ctr   0                   96bf42d01fbe3       nvidia-device-plugin-daemonset-qctgj
	8a2a2ea68ffb9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner        0                   4bbf1c6c47cb8       storage-provisioner
	212bbd76b157e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                    0                   3e817c0064e88       coredns-5dd5756b68-4pcrt
	04b3d1fabb914       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                 0                   169a417eb2224       kube-proxy-9h94v
	9425dce69f24a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler             0                   d27b0fcd09df0       kube-scheduler-addons-600097
	f6e26b9c70186       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                       0                   767a97fffcdcf       etcd-addons-600097
	5009adb028085       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver             0                   2d274f82c931e       kube-apiserver-addons-600097
	4954df1875ea7       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager    0                   bce95c3010934       kube-controller-manager-addons-600097
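Nothing in this table is unhealthy on its face: the two Exited kube-webhook-certgen containers are the ingress-nginx admission create/patch Jobs, which run once and exit by design, and hello-world-app (7 seconds old) lines up with the tail of the test. A sketch for reproducing this view directly, assuming the addons-600097 VM is still running:

    out/minikube-linux-amd64 -p addons-600097 ssh "sudo crictl ps -a"

crictl ps -a includes exited containers; drop -a to list only running ones.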
	
	
	==> coredns [212bbd76b157e12126054b522ab7cb2345412bc2a4948f8f4d5eb0d7eed7a47b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:56750 - 50252 "HINFO IN 6012791314514810438.1269741809087642760. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021058283s
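CoreDNS also looks healthy here: it started, reloaded its configuration once, and resolved its startup HINFO self-query to the expected NXDOMAIN (the loop plugin's loop-detection probe). A sketch for tailing the same stream without a full minikube logs dump, assuming the standard k8s-app=kube-dns label that kubeadm-based clusters apply to CoreDNS pods:

    kubectl --context addons-600097 -n kube-system logs -l k8s-app=kube-dns --tail=20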
	
	
	==> describe nodes <==
	Name:               addons-600097
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-600097
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=addons-600097
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T01_12_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-600097
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 01:12:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-600097
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 01:17:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 01:15:51 +0000   Thu, 29 Feb 2024 01:12:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 01:15:51 +0000   Thu, 29 Feb 2024 01:12:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 01:15:51 +0000   Thu, 29 Feb 2024 01:12:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 01:15:51 +0000   Thu, 29 Feb 2024 01:12:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    addons-600097
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a38fcc474c74a92a46fb62fa427ef29
	  System UUID:                3a38fcc4-74c7-4a92-a46f-b62fa427ef29
	  Boot ID:                    cd993e19-ae3b-4564-887c-3d6000ae6b48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-bfl47         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-5f6b4f85fd-zccgt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  headlamp                    headlamp-7ddfbb94ff-lsqz4                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 coredns-5dd5756b68-4pcrt                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m37s
	  kube-system                 etcd-addons-600097                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m50s
	  kube-system                 kube-apiserver-addons-600097             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-addons-600097    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-9h94v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-addons-600097             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 nvidia-device-plugin-daemonset-qctgj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-qmvcb           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node addons-600097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node addons-600097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m56s)  kubelet          Node addons-600097 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m50s                  kubelet          Node addons-600097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m50s                  kubelet          Node addons-600097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m50s                  kubelet          Node addons-600097 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m49s                  kubelet          Node addons-600097 status is now: NodeReady
	  Normal  RegisteredNode           4m37s                  node-controller  Node addons-600097 event: Registered Node addons-600097 in Controller
	
	
	==> dmesg <==
	[  +0.591015] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[Feb29 01:13] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.345641] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.669037] kauditd_printk_skb: 134 callbacks suppressed
	[  +9.392475] kauditd_printk_skb: 66 callbacks suppressed
	[  +8.524232] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.722632] kauditd_printk_skb: 2 callbacks suppressed
	[Feb29 01:14] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.845597] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.827795] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.208770] kauditd_printk_skb: 89 callbacks suppressed
	[ +11.161226] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.324237] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.952838] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.069756] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.110661] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.070531] kauditd_printk_skb: 32 callbacks suppressed
	[Feb29 01:15] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.700025] kauditd_printk_skb: 1 callbacks suppressed
	[ +14.001421] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.814480] kauditd_printk_skb: 10 callbacks suppressed
	[ +15.617372] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.439728] kauditd_printk_skb: 25 callbacks suppressed
	[Feb29 01:17] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.067671] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [f6e26b9c70186d9761efbc13ad8f6cbbf8e52ab8ce4a433d365f57e3a6f7fefd] <==
	{"level":"warn","ts":"2024-02-29T01:13:36.341315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.314191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10580"}
	{"level":"info","ts":"2024-02-29T01:13:36.341407Z","caller":"traceutil/trace.go:171","msg":"trace[131999217] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:937; }","duration":"244.435837ms","start":"2024-02-29T01:13:36.096962Z","end":"2024-02-29T01:13:36.341397Z","steps":["trace[131999217] 'agreement among raft nodes before linearized reading'  (duration: 244.26436ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:13:36.341641Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.353341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81411"}
	{"level":"info","ts":"2024-02-29T01:13:36.341688Z","caller":"traceutil/trace.go:171","msg":"trace[878180946] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:937; }","duration":"102.402549ms","start":"2024-02-29T01:13:36.239279Z","end":"2024-02-29T01:13:36.341681Z","steps":["trace[878180946] 'agreement among raft nodes before linearized reading'  (duration: 102.268868ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:13:41.175628Z","caller":"traceutil/trace.go:171","msg":"trace[1085054370] transaction","detail":"{read_only:false; response_revision:953; number_of_response:1; }","duration":"182.80404ms","start":"2024-02-29T01:13:40.992812Z","end":"2024-02-29T01:13:41.175616Z","steps":["trace[1085054370] 'process raft request'  (duration: 182.479515ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:13:57.592875Z","caller":"traceutil/trace.go:171","msg":"trace[695287154] transaction","detail":"{read_only:false; response_revision:979; number_of_response:1; }","duration":"254.347028ms","start":"2024-02-29T01:13:57.338514Z","end":"2024-02-29T01:13:57.592861Z","steps":["trace[695287154] 'process raft request'  (duration: 254.111446ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:14:03.806798Z","caller":"traceutil/trace.go:171","msg":"trace[956647215] transaction","detail":"{read_only:false; response_revision:988; number_of_response:1; }","duration":"184.914575ms","start":"2024-02-29T01:14:03.621834Z","end":"2024-02-29T01:14:03.806749Z","steps":["trace[956647215] 'process raft request'  (duration: 184.633085ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:14:11.171588Z","caller":"traceutil/trace.go:171","msg":"trace[820998164] transaction","detail":"{read_only:false; response_revision:1031; number_of_response:1; }","duration":"146.211444ms","start":"2024-02-29T01:14:11.025354Z","end":"2024-02-29T01:14:11.171565Z","steps":["trace[820998164] 'process raft request'  (duration: 144.604827ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:14:28.181377Z","caller":"traceutil/trace.go:171","msg":"trace[855275724] linearizableReadLoop","detail":"{readStateIndex:1177; appliedIndex:1176; }","duration":"190.20134ms","start":"2024-02-29T01:14:27.991161Z","end":"2024-02-29T01:14:28.181362Z","steps":["trace[855275724] 'read index received'  (duration: 189.348974ms)","trace[855275724] 'applied index is now lower than readState.Index'  (duration: 851.804µs)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T01:14:28.181832Z","caller":"traceutil/trace.go:171","msg":"trace[2044153138] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"214.613213ms","start":"2024-02-29T01:14:27.967133Z","end":"2024-02-29T01:14:28.181747Z","steps":["trace[2044153138] 'process raft request'  (duration: 213.092318ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:14:28.182408Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.347383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T01:14:28.184282Z","caller":"traceutil/trace.go:171","msg":"trace[853152572] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1139; }","duration":"193.238667ms","start":"2024-02-29T01:14:27.991032Z","end":"2024-02-29T01:14:28.18427Z","steps":["trace[853152572] 'agreement among raft nodes before linearized reading'  (duration: 190.798593ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:14:28.183477Z","caller":"traceutil/trace.go:171","msg":"trace[52740345] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"115.812748ms","start":"2024-02-29T01:14:28.067657Z","end":"2024-02-29T01:14:28.183469Z","steps":["trace[52740345] 'process raft request'  (duration: 115.54734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:14:36.304567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"395.876262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T01:14:36.304642Z","caller":"traceutil/trace.go:171","msg":"trace[950095716] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1182; }","duration":"395.965553ms","start":"2024-02-29T01:14:35.908666Z","end":"2024-02-29T01:14:36.304631Z","steps":["trace[950095716] 'range keys from in-memory index tree'  (duration: 395.747083ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:14:36.304673Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T01:14:35.908651Z","time spent":"396.014626ms","remote":"127.0.0.1:60152","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-02-29T01:14:36.304778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.127094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.181\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-02-29T01:14:36.304836Z","caller":"traceutil/trace.go:171","msg":"trace[990397485] range","detail":"{range_begin:/registry/masterleases/192.168.39.181; range_end:; response_count:1; response_revision:1182; }","duration":"213.22719ms","start":"2024-02-29T01:14:36.0916Z","end":"2024-02-29T01:14:36.304827Z","steps":["trace[990397485] 'range keys from in-memory index tree'  (duration: 212.939612ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:15:16.354991Z","caller":"traceutil/trace.go:171","msg":"trace[592789921] linearizableReadLoop","detail":"{readStateIndex:1574; appliedIndex:1573; }","duration":"182.05642ms","start":"2024-02-29T01:15:16.172911Z","end":"2024-02-29T01:15:16.354968Z","steps":["trace[592789921] 'read index received'  (duration: 181.931661ms)","trace[592789921] 'applied index is now lower than readState.Index'  (duration: 124.116µs)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T01:15:16.3552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.278647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5607"}
	{"level":"info","ts":"2024-02-29T01:15:16.355223Z","caller":"traceutil/trace.go:171","msg":"trace[748801209] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1518; }","duration":"182.329702ms","start":"2024-02-29T01:15:16.172887Z","end":"2024-02-29T01:15:16.355217Z","steps":["trace[748801209] 'agreement among raft nodes before linearized reading'  (duration: 182.233422ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:16:07.120796Z","caller":"traceutil/trace.go:171","msg":"trace[2078308137] linearizableReadLoop","detail":"{readStateIndex:1869; appliedIndex:1868; }","duration":"212.340262ms","start":"2024-02-29T01:16:06.908426Z","end":"2024-02-29T01:16:07.120766Z","steps":["trace[2078308137] 'read index received'  (duration: 212.193785ms)","trace[2078308137] 'applied index is now lower than readState.Index'  (duration: 146.085µs)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T01:16:07.120992Z","caller":"traceutil/trace.go:171","msg":"trace[703459742] transaction","detail":"{read_only:false; response_revision:1799; number_of_response:1; }","duration":"228.172586ms","start":"2024-02-29T01:16:06.892797Z","end":"2024-02-29T01:16:07.12097Z","steps":["trace[703459742] 'process raft request'  (duration: 227.865739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:16:07.1212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.641243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T01:16:07.121416Z","caller":"traceutil/trace.go:171","msg":"trace[1829130150] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1799; }","duration":"213.007537ms","start":"2024-02-29T01:16:06.908399Z","end":"2024-02-29T01:16:07.121406Z","steps":["trace[1829130150] 'agreement among raft nodes before linearized reading'  (duration: 212.625126ms)"],"step_count":1}
	
	
	==> gcp-auth [f6e52b89c446a5aca696d9d9c8e0653b2a045ee5d7a02e149e8c7e01d4dd28c7] <==
	2024/02/29 01:14:32 GCP Auth Webhook started!
	2024/02/29 01:14:38 Ready to marshal response ...
	2024/02/29 01:14:38 Ready to write response ...
	2024/02/29 01:14:38 Ready to marshal response ...
	2024/02/29 01:14:38 Ready to write response ...
	2024/02/29 01:14:48 Ready to marshal response ...
	2024/02/29 01:14:48 Ready to write response ...
	2024/02/29 01:14:49 Ready to marshal response ...
	2024/02/29 01:14:49 Ready to write response ...
	2024/02/29 01:14:50 Ready to marshal response ...
	2024/02/29 01:14:50 Ready to write response ...
	2024/02/29 01:15:03 Ready to marshal response ...
	2024/02/29 01:15:03 Ready to write response ...
	2024/02/29 01:15:10 Ready to marshal response ...
	2024/02/29 01:15:10 Ready to write response ...
	2024/02/29 01:15:25 Ready to marshal response ...
	2024/02/29 01:15:25 Ready to write response ...
	2024/02/29 01:15:25 Ready to marshal response ...
	2024/02/29 01:15:25 Ready to write response ...
	2024/02/29 01:15:25 Ready to marshal response ...
	2024/02/29 01:15:25 Ready to write response ...
	2024/02/29 01:15:39 Ready to marshal response ...
	2024/02/29 01:15:39 Ready to write response ...
	2024/02/29 01:17:25 Ready to marshal response ...
	2024/02/29 01:17:25 Ready to write response ...
	
	
	==> kernel <==
	 01:17:37 up 5 min,  0 users,  load average: 0.52, 1.20, 0.63
	Linux addons-600097 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5009adb028085947daae1ba7fef2a0f2ae731e3c2e5efeed752043cdc2f9d0ea] <==
	I0229 01:15:05.362373       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0229 01:15:05.368792       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0229 01:15:05.784290       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W0229 01:15:06.403040       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0229 01:15:23.261599       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0229 01:15:25.190990       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.130.247"}
	I0229 01:15:26.016441       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0229 01:15:57.077992       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 01:15:57.078040       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 01:15:57.090916       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 01:15:57.091035       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 01:15:57.112772       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 01:15:57.112848       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 01:15:57.128848       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 01:15:57.128929       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 01:15:57.131670       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 01:15:57.131736       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 01:15:57.144746       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 01:15:57.144807       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0229 01:15:57.170480       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0229 01:15:57.170534       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0229 01:15:58.131919       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0229 01:15:58.171234       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0229 01:15:58.187356       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0229 01:17:25.985456       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.157.190"}
	
	
	==> kube-controller-manager [4954df1875ea7603adf964a5a13be727dc7fe5a12501c20f689a1fb5d72ecb65] <==
	W0229 01:16:35.925481       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 01:16:35.925541       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 01:16:36.225632       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 01:16:36.225739       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 01:16:39.077640       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 01:16:39.077743       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 01:17:02.177579       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 01:17:02.177637       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 01:17:06.050524       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 01:17:06.050673       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 01:17:12.150214       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 01:17:12.150348       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 01:17:18.346824       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 01:17:18.346881       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0229 01:17:25.710870       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0229 01:17:25.759995       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-bfl47"
	I0229 01:17:25.771552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.217669ms"
	I0229 01:17:25.814741       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.05394ms"
	I0229 01:17:25.814843       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.023µs"
	I0229 01:17:25.821507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.104µs"
	I0229 01:17:29.118917       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0229 01:17:29.125759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="3.949µs"
	I0229 01:17:29.135217       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0229 01:17:30.054891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="11.217834ms"
	I0229 01:17:30.055284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="106.201µs"
	
	
	==> kube-proxy [04b3d1fabb914b27d0d0b03918098e702230a9210cfb28a7fac5e69d894e252a] <==
	I0229 01:13:02.098829       1 server_others.go:69] "Using iptables proxy"
	I0229 01:13:02.113542       1 node.go:141] Successfully retrieved node IP: 192.168.39.181
	I0229 01:13:02.217460       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 01:13:02.217505       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 01:13:02.222723       1 server_others.go:152] "Using iptables Proxier"
	I0229 01:13:02.222781       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 01:13:02.223140       1 server.go:846] "Version info" version="v1.28.4"
	I0229 01:13:02.223176       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 01:13:02.224555       1 config.go:188] "Starting service config controller"
	I0229 01:13:02.224595       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 01:13:02.224613       1 config.go:97] "Starting endpoint slice config controller"
	I0229 01:13:02.224617       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 01:13:02.224897       1 config.go:315] "Starting node config controller"
	I0229 01:13:02.224937       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 01:13:02.325583       1 shared_informer.go:318] Caches are synced for node config
	I0229 01:13:02.325629       1 shared_informer.go:318] Caches are synced for service config
	I0229 01:13:02.325651       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9425dce69f24a188c47d5add3884b5f287c7e60724dab40e088ac0e0b54c0708] <==
	E0229 01:12:44.571255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 01:12:44.570696       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 01:12:44.571336       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 01:12:44.570862       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 01:12:44.571417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 01:12:44.571446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 01:12:44.573371       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 01:12:44.573414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 01:12:45.394265       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 01:12:45.394357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 01:12:45.398042       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 01:12:45.398181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 01:12:45.402679       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 01:12:45.403661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 01:12:45.460352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 01:12:45.460473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 01:12:45.491939       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 01:12:45.492151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 01:12:45.496901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 01:12:45.499327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 01:12:45.634009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 01:12:45.634179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 01:12:45.637523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 01:12:45.638156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0229 01:12:46.062906       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 01:17:25 addons-600097 kubelet[1217]: I0229 01:17:25.775687    1217 memory_manager.go:346] "RemoveStaleState removing state" podUID="d8ff48fd-0803-4e5a-8d3d-71b3c9399207" containerName="node-driver-registrar"
	Feb 29 01:17:25 addons-600097 kubelet[1217]: I0229 01:17:25.775693    1217 memory_manager.go:346] "RemoveStaleState removing state" podUID="d8ff48fd-0803-4e5a-8d3d-71b3c9399207" containerName="csi-snapshotter"
	Feb 29 01:17:25 addons-600097 kubelet[1217]: I0229 01:17:25.838757    1217 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdcd9\" (UniqueName: \"kubernetes.io/projected/dab46e76-6627-4755-8a68-2b5088917ea4-kube-api-access-hdcd9\") pod \"hello-world-app-5d77478584-bfl47\" (UID: \"dab46e76-6627-4755-8a68-2b5088917ea4\") " pod="default/hello-world-app-5d77478584-bfl47"
	Feb 29 01:17:25 addons-600097 kubelet[1217]: I0229 01:17:25.838901    1217 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/dab46e76-6627-4755-8a68-2b5088917ea4-gcp-creds\") pod \"hello-world-app-5d77478584-bfl47\" (UID: \"dab46e76-6627-4755-8a68-2b5088917ea4\") " pod="default/hello-world-app-5d77478584-bfl47"
	Feb 29 01:17:27 addons-600097 kubelet[1217]: I0229 01:17:27.152292    1217 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxvwg\" (UniqueName: \"kubernetes.io/projected/bd4a21c2-8e95-404a-a7db-ee307a4d8899-kube-api-access-lxvwg\") pod \"bd4a21c2-8e95-404a-a7db-ee307a4d8899\" (UID: \"bd4a21c2-8e95-404a-a7db-ee307a4d8899\") "
	Feb 29 01:17:27 addons-600097 kubelet[1217]: I0229 01:17:27.154772    1217 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd4a21c2-8e95-404a-a7db-ee307a4d8899-kube-api-access-lxvwg" (OuterVolumeSpecName: "kube-api-access-lxvwg") pod "bd4a21c2-8e95-404a-a7db-ee307a4d8899" (UID: "bd4a21c2-8e95-404a-a7db-ee307a4d8899"). InnerVolumeSpecName "kube-api-access-lxvwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 29 01:17:27 addons-600097 kubelet[1217]: I0229 01:17:27.253403    1217 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lxvwg\" (UniqueName: \"kubernetes.io/projected/bd4a21c2-8e95-404a-a7db-ee307a4d8899-kube-api-access-lxvwg\") on node \"addons-600097\" DevicePath \"\""
	Feb 29 01:17:28 addons-600097 kubelet[1217]: I0229 01:17:28.005351    1217 scope.go:117] "RemoveContainer" containerID="d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2"
	Feb 29 01:17:28 addons-600097 kubelet[1217]: I0229 01:17:28.054200    1217 scope.go:117] "RemoveContainer" containerID="d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2"
	Feb 29 01:17:28 addons-600097 kubelet[1217]: E0229 01:17:28.054858    1217 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2\": container with ID starting with d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2 not found: ID does not exist" containerID="d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2"
	Feb 29 01:17:28 addons-600097 kubelet[1217]: I0229 01:17:28.054935    1217 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2"} err="failed to get container status \"d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2\": rpc error: code = NotFound desc = could not find container \"d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2\": container with ID starting with d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2 not found: ID does not exist"
	Feb 29 01:17:29 addons-600097 kubelet[1217]: I0229 01:17:29.864960    1217 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4f370dc0-0da1-44ab-bfc6-54766f7b0faa" path="/var/lib/kubelet/pods/4f370dc0-0da1-44ab-bfc6-54766f7b0faa/volumes"
	Feb 29 01:17:29 addons-600097 kubelet[1217]: I0229 01:17:29.865859    1217 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bd4a21c2-8e95-404a-a7db-ee307a4d8899" path="/var/lib/kubelet/pods/bd4a21c2-8e95-404a-a7db-ee307a4d8899/volumes"
	Feb 29 01:17:29 addons-600097 kubelet[1217]: I0229 01:17:29.866393    1217 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="deed00d9-1a9e-46d5-a2ee-8bc5d56d7392" path="/var/lib/kubelet/pods/deed00d9-1a9e-46d5-a2ee-8bc5d56d7392/volumes"
	Feb 29 01:17:32 addons-600097 kubelet[1217]: I0229 01:17:32.506155    1217 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9b37e4c-eb54-46a7-8758-c79789004c90-webhook-cert\") pod \"f9b37e4c-eb54-46a7-8758-c79789004c90\" (UID: \"f9b37e4c-eb54-46a7-8758-c79789004c90\") "
	Feb 29 01:17:32 addons-600097 kubelet[1217]: I0229 01:17:32.506222    1217 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7txpp\" (UniqueName: \"kubernetes.io/projected/f9b37e4c-eb54-46a7-8758-c79789004c90-kube-api-access-7txpp\") pod \"f9b37e4c-eb54-46a7-8758-c79789004c90\" (UID: \"f9b37e4c-eb54-46a7-8758-c79789004c90\") "
	Feb 29 01:17:32 addons-600097 kubelet[1217]: I0229 01:17:32.509238    1217 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b37e4c-eb54-46a7-8758-c79789004c90-kube-api-access-7txpp" (OuterVolumeSpecName: "kube-api-access-7txpp") pod "f9b37e4c-eb54-46a7-8758-c79789004c90" (UID: "f9b37e4c-eb54-46a7-8758-c79789004c90"). InnerVolumeSpecName "kube-api-access-7txpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 29 01:17:32 addons-600097 kubelet[1217]: I0229 01:17:32.512636    1217 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b37e4c-eb54-46a7-8758-c79789004c90-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f9b37e4c-eb54-46a7-8758-c79789004c90" (UID: "f9b37e4c-eb54-46a7-8758-c79789004c90"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 29 01:17:32 addons-600097 kubelet[1217]: I0229 01:17:32.607191    1217 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9b37e4c-eb54-46a7-8758-c79789004c90-webhook-cert\") on node \"addons-600097\" DevicePath \"\""
	Feb 29 01:17:32 addons-600097 kubelet[1217]: I0229 01:17:32.607225    1217 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7txpp\" (UniqueName: \"kubernetes.io/projected/f9b37e4c-eb54-46a7-8758-c79789004c90-kube-api-access-7txpp\") on node \"addons-600097\" DevicePath \"\""
	Feb 29 01:17:33 addons-600097 kubelet[1217]: I0229 01:17:33.048408    1217 scope.go:117] "RemoveContainer" containerID="dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b"
	Feb 29 01:17:33 addons-600097 kubelet[1217]: I0229 01:17:33.072195    1217 scope.go:117] "RemoveContainer" containerID="dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b"
	Feb 29 01:17:33 addons-600097 kubelet[1217]: E0229 01:17:33.072892    1217 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b\": container with ID starting with dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b not found: ID does not exist" containerID="dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b"
	Feb 29 01:17:33 addons-600097 kubelet[1217]: I0229 01:17:33.072954    1217 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b"} err="failed to get container status \"dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b\": rpc error: code = NotFound desc = could not find container \"dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b\": container with ID starting with dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b not found: ID does not exist"
	Feb 29 01:17:33 addons-600097 kubelet[1217]: I0229 01:17:33.864204    1217 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f9b37e4c-eb54-46a7-8758-c79789004c90" path="/var/lib/kubelet/pods/f9b37e4c-eb54-46a7-8758-c79789004c90/volumes"
	
	
	==> storage-provisioner [8a2a2ea68ffb9c400b6cf9fe38eee3ecee40957dc05b127ebb08c7df6de024e1] <==
	I0229 01:13:10.702347       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 01:13:10.716643       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 01:13:10.716717       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 01:13:10.740791       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 01:13:10.747026       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-600097_0a298beb-c23d-4fb0-80b7-c71bf445f0b7!
	I0229 01:13:10.747664       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"76da4364-c960-4fa6-810a-ee2c399f8169", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-600097_0a298beb-c23d-4fb0-80b7-c71bf445f0b7 became leader
	I0229 01:13:10.851269       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-600097_0a298beb-c23d-4fb0-80b7-c71bf445f0b7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-600097 -n addons-600097
helpers_test.go:261: (dbg) Run:  kubectl --context addons-600097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.27s)

x
+
TestAddons/parallel/NvidiaDevicePlugin (7.77s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qctgj" [a6d1f69b-373d-49c1-a1da-9b03d99cc13c] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005242765s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-600097
addons_test.go:955: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-600097: exit status 11 (359.607191ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-29T01:15:21Z" level=error msg="stat /run/runc/5fac829763bcb1228e6a505269f1f56f147d1d4a039464b89262e974fc4836a1: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:956: failed to disable nvidia-device-plugin: args "out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-600097" : exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-600097 -n addons-600097
helpers_test.go:244: <<< TestAddons/parallel/NvidiaDevicePlugin FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/NvidiaDevicePlugin]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-600097 logs -n 25: (1.523967567s)
helpers_test.go:252: TestAddons/parallel/NvidiaDevicePlugin logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-425270                                                                     | download-only-425270 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| start   | -o=json --download-only                                                                     | download-only-057025 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC |                     |
	|         | -p download-only-057025                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-057025                                                                     | download-only-057025 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| start   | -o=json --download-only                                                                     | download-only-561532 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC |                     |
	|         | -p download-only-561532                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-561532                                                                     | download-only-561532 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-425270                                                                     | download-only-425270 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-057025                                                                     | download-only-057025 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-561532                                                                     | download-only-561532 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-801156 | jenkins | v1.32.0 | 29 Feb 24 01:12 UTC |                     |
	|         | binary-mirror-801156                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39823                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-801156                                                                     | binary-mirror-801156 | jenkins | v1.32.0 | 29 Feb 24 01:12 UTC | 29 Feb 24 01:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:12 UTC |                     |
	|         | addons-600097                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:12 UTC |                     |
	|         | addons-600097                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-600097 --wait=true                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:12 UTC | 29 Feb 24 01:14 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:14 UTC |
	|         | addons-600097                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-600097 ssh cat                                                                       | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:14 UTC |
	|         | /opt/local-path-provisioner/pvc-46cdb420-a06c-4c86-b1c5-0196b03f5f20_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-600097 addons disable                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-600097 ip                                                                            | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:14 UTC |
	| addons  | addons-600097 addons disable                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:14 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-600097 addons disable                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:14 UTC | 29 Feb 24 01:14 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-600097 addons                                                                        | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC | 29 Feb 24 01:15 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC | 29 Feb 24 01:15 UTC |
	|         | addons-600097                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-600097 ssh curl -s                                                                   | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-600097        | jenkins | v1.32.0 | 29 Feb 24 01:15 UTC |                     |
	|         | -p addons-600097                                                                            |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:12:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:12:00.561663  324746 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:12:00.561761  324746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:12:00.561773  324746 out.go:304] Setting ErrFile to fd 2...
	I0229 01:12:00.561777  324746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:12:00.561988  324746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:12:00.562647  324746 out.go:298] Setting JSON to false
	I0229 01:12:00.563639  324746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3264,"bootTime":1709165857,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:12:00.563713  324746 start.go:139] virtualization: kvm guest
	I0229 01:12:00.565707  324746 out.go:177] * [addons-600097] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:12:00.567009  324746 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:12:00.567034  324746 notify.go:220] Checking for updates...
	I0229 01:12:00.568400  324746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:12:00.569675  324746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:12:00.570788  324746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:12:00.571930  324746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:12:00.572967  324746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:12:00.574209  324746 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:12:00.605793  324746 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 01:12:00.606889  324746 start.go:299] selected driver: kvm2
	I0229 01:12:00.606903  324746 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:12:00.606915  324746 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:12:00.607606  324746 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:12:00.607700  324746 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:12:00.622814  324746 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:12:00.622869  324746 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:12:00.623101  324746 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 01:12:00.623191  324746 cni.go:84] Creating CNI manager for ""
	I0229 01:12:00.623207  324746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:12:00.623215  324746 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 01:12:00.623229  324746 start_flags.go:323] config:
	{Name:addons-600097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-600097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:12:00.623423  324746 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:12:00.625069  324746 out.go:177] * Starting control plane node addons-600097 in cluster addons-600097
	I0229 01:12:00.626308  324746 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:12:00.626348  324746 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 01:12:00.626363  324746 cache.go:56] Caching tarball of preloaded images
	I0229 01:12:00.626465  324746 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 01:12:00.626477  324746 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 01:12:00.626777  324746 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/config.json ...
	I0229 01:12:00.626796  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/config.json: {Name:mk2e96a395af39f7672aec4cced3cd5fe3b7734b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:00.626930  324746 start.go:365] acquiring machines lock for addons-600097: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:12:00.626972  324746 start.go:369] acquired machines lock for "addons-600097" in 29.14µs
	I0229 01:12:00.626988  324746 start.go:93] Provisioning new machine with config: &{Name:addons-600097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-600097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 01:12:00.627041  324746 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 01:12:00.628620  324746 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0229 01:12:00.628756  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:12:00.628800  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:12:00.643045  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33601
	I0229 01:12:00.643521  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:12:00.644120  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:12:00.644143  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:12:00.644472  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:12:00.644674  324746 main.go:141] libmachine: (addons-600097) Calling .GetMachineName
	I0229 01:12:00.644821  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:00.644962  324746 start.go:159] libmachine.API.Create for "addons-600097" (driver="kvm2")
	I0229 01:12:00.645006  324746 client.go:168] LocalClient.Create starting
	I0229 01:12:00.645049  324746 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem
	I0229 01:12:00.818492  324746 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem
	I0229 01:12:00.931117  324746 main.go:141] libmachine: Running pre-create checks...
	I0229 01:12:00.931147  324746 main.go:141] libmachine: (addons-600097) Calling .PreCreateCheck
	I0229 01:12:00.931722  324746 main.go:141] libmachine: (addons-600097) Calling .GetConfigRaw
	I0229 01:12:00.932176  324746 main.go:141] libmachine: Creating machine...
	I0229 01:12:00.932194  324746 main.go:141] libmachine: (addons-600097) Calling .Create
	I0229 01:12:00.932318  324746 main.go:141] libmachine: (addons-600097) Creating KVM machine...
	I0229 01:12:00.933659  324746 main.go:141] libmachine: (addons-600097) DBG | found existing default KVM network
	I0229 01:12:00.934422  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:00.934283  324768 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0229 01:12:00.939592  324746 main.go:141] libmachine: (addons-600097) DBG | trying to create private KVM network mk-addons-600097 192.168.39.0/24...
	I0229 01:12:01.006111  324746 main.go:141] libmachine: (addons-600097) DBG | private KVM network mk-addons-600097 192.168.39.0/24 created
	I0229 01:12:01.006171  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:01.006059  324768 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:12:01.006197  324746 main.go:141] libmachine: (addons-600097) Setting up store path in /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097 ...
	I0229 01:12:01.006254  324746 main.go:141] libmachine: (addons-600097) Building disk image from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 01:12:01.006293  324746 main.go:141] libmachine: (addons-600097) Downloading /home/jenkins/minikube-integration/18063-316644/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 01:12:01.266326  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:01.266157  324768 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa...
	I0229 01:12:01.418130  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:01.417973  324768 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/addons-600097.rawdisk...
	I0229 01:12:01.418180  324746 main.go:141] libmachine: (addons-600097) DBG | Writing magic tar header
	I0229 01:12:01.418192  324746 main.go:141] libmachine: (addons-600097) DBG | Writing SSH key tar header
	I0229 01:12:01.418199  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:01.418134  324768 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097 ...
	I0229 01:12:01.418219  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097
	I0229 01:12:01.418306  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097 (perms=drwx------)
	I0229 01:12:01.418330  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines (perms=drwxr-xr-x)
	I0229 01:12:01.418341  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines
	I0229 01:12:01.418357  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:12:01.418367  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube (perms=drwxr-xr-x)
	I0229 01:12:01.418373  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644
	I0229 01:12:01.418384  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 01:12:01.418392  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home/jenkins
	I0229 01:12:01.418408  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644 (perms=drwxrwxr-x)
	I0229 01:12:01.418418  324746 main.go:141] libmachine: (addons-600097) DBG | Checking permissions on dir: /home
	I0229 01:12:01.418429  324746 main.go:141] libmachine: (addons-600097) DBG | Skipping /home - not owner
	I0229 01:12:01.418438  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 01:12:01.418443  324746 main.go:141] libmachine: (addons-600097) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 01:12:01.418451  324746 main.go:141] libmachine: (addons-600097) Creating domain...
	I0229 01:12:01.419420  324746 main.go:141] libmachine: (addons-600097) define libvirt domain using xml: 
	I0229 01:12:01.419442  324746 main.go:141] libmachine: (addons-600097) <domain type='kvm'>
	I0229 01:12:01.419449  324746 main.go:141] libmachine: (addons-600097)   <name>addons-600097</name>
	I0229 01:12:01.419456  324746 main.go:141] libmachine: (addons-600097)   <memory unit='MiB'>4000</memory>
	I0229 01:12:01.419466  324746 main.go:141] libmachine: (addons-600097)   <vcpu>2</vcpu>
	I0229 01:12:01.419473  324746 main.go:141] libmachine: (addons-600097)   <features>
	I0229 01:12:01.419482  324746 main.go:141] libmachine: (addons-600097)     <acpi/>
	I0229 01:12:01.419495  324746 main.go:141] libmachine: (addons-600097)     <apic/>
	I0229 01:12:01.419500  324746 main.go:141] libmachine: (addons-600097)     <pae/>
	I0229 01:12:01.419504  324746 main.go:141] libmachine: (addons-600097)     
	I0229 01:12:01.419509  324746 main.go:141] libmachine: (addons-600097)   </features>
	I0229 01:12:01.419513  324746 main.go:141] libmachine: (addons-600097)   <cpu mode='host-passthrough'>
	I0229 01:12:01.419520  324746 main.go:141] libmachine: (addons-600097)   
	I0229 01:12:01.419524  324746 main.go:141] libmachine: (addons-600097)   </cpu>
	I0229 01:12:01.419542  324746 main.go:141] libmachine: (addons-600097)   <os>
	I0229 01:12:01.419578  324746 main.go:141] libmachine: (addons-600097)     <type>hvm</type>
	I0229 01:12:01.419591  324746 main.go:141] libmachine: (addons-600097)     <boot dev='cdrom'/>
	I0229 01:12:01.419598  324746 main.go:141] libmachine: (addons-600097)     <boot dev='hd'/>
	I0229 01:12:01.419608  324746 main.go:141] libmachine: (addons-600097)     <bootmenu enable='no'/>
	I0229 01:12:01.419618  324746 main.go:141] libmachine: (addons-600097)   </os>
	I0229 01:12:01.419627  324746 main.go:141] libmachine: (addons-600097)   <devices>
	I0229 01:12:01.419638  324746 main.go:141] libmachine: (addons-600097)     <disk type='file' device='cdrom'>
	I0229 01:12:01.419667  324746 main.go:141] libmachine: (addons-600097)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/boot2docker.iso'/>
	I0229 01:12:01.419704  324746 main.go:141] libmachine: (addons-600097)       <target dev='hdc' bus='scsi'/>
	I0229 01:12:01.419714  324746 main.go:141] libmachine: (addons-600097)       <readonly/>
	I0229 01:12:01.419729  324746 main.go:141] libmachine: (addons-600097)     </disk>
	I0229 01:12:01.419747  324746 main.go:141] libmachine: (addons-600097)     <disk type='file' device='disk'>
	I0229 01:12:01.419762  324746 main.go:141] libmachine: (addons-600097)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 01:12:01.419776  324746 main.go:141] libmachine: (addons-600097)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/addons-600097.rawdisk'/>
	I0229 01:12:01.419783  324746 main.go:141] libmachine: (addons-600097)       <target dev='hda' bus='virtio'/>
	I0229 01:12:01.419789  324746 main.go:141] libmachine: (addons-600097)     </disk>
	I0229 01:12:01.419798  324746 main.go:141] libmachine: (addons-600097)     <interface type='network'>
	I0229 01:12:01.419812  324746 main.go:141] libmachine: (addons-600097)       <source network='mk-addons-600097'/>
	I0229 01:12:01.419827  324746 main.go:141] libmachine: (addons-600097)       <model type='virtio'/>
	I0229 01:12:01.419836  324746 main.go:141] libmachine: (addons-600097)     </interface>
	I0229 01:12:01.419846  324746 main.go:141] libmachine: (addons-600097)     <interface type='network'>
	I0229 01:12:01.419856  324746 main.go:141] libmachine: (addons-600097)       <source network='default'/>
	I0229 01:12:01.419863  324746 main.go:141] libmachine: (addons-600097)       <model type='virtio'/>
	I0229 01:12:01.419878  324746 main.go:141] libmachine: (addons-600097)     </interface>
	I0229 01:12:01.419887  324746 main.go:141] libmachine: (addons-600097)     <serial type='pty'>
	I0229 01:12:01.419895  324746 main.go:141] libmachine: (addons-600097)       <target port='0'/>
	I0229 01:12:01.419908  324746 main.go:141] libmachine: (addons-600097)     </serial>
	I0229 01:12:01.419920  324746 main.go:141] libmachine: (addons-600097)     <console type='pty'>
	I0229 01:12:01.419929  324746 main.go:141] libmachine: (addons-600097)       <target type='serial' port='0'/>
	I0229 01:12:01.419940  324746 main.go:141] libmachine: (addons-600097)     </console>
	I0229 01:12:01.419948  324746 main.go:141] libmachine: (addons-600097)     <rng model='virtio'>
	I0229 01:12:01.419958  324746 main.go:141] libmachine: (addons-600097)       <backend model='random'>/dev/random</backend>
	I0229 01:12:01.419968  324746 main.go:141] libmachine: (addons-600097)     </rng>
	I0229 01:12:01.419984  324746 main.go:141] libmachine: (addons-600097)     
	I0229 01:12:01.420003  324746 main.go:141] libmachine: (addons-600097)     
	I0229 01:12:01.420012  324746 main.go:141] libmachine: (addons-600097)   </devices>
	I0229 01:12:01.420020  324746 main.go:141] libmachine: (addons-600097) </domain>
	I0229 01:12:01.420029  324746 main.go:141] libmachine: (addons-600097) 
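	For readers reproducing this step by hand: the domain XML logged above can be registered directly with libvirt. Below is a minimal Go sketch, assuming virsh is on PATH and qemu:///system is reachable; minikube's kvm2 driver talks to libvirt through its API rather than shelling out, and the trimmed domainXML here is a hypothetical subset of the definition above, so treat it as an illustration only.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// Trimmed-down domain definition based on the XML logged above;
	// a real definition also needs the disk, network, and console devices.
	const domainXML = `<domain type='kvm'>
	  <name>addons-600097</name>
	  <memory unit='MiB'>4000</memory>
	  <vcpu>2</vcpu>
	  <os><type>hvm</type></os>
	</domain>`

	func main() {
		// Write the XML to a temp file and hand it to virsh.
		f, err := os.CreateTemp("", "addons-600097-*.xml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(domainXML); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		f.Close()

		// `virsh define` registers the domain; `virsh start` would boot it.
		out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", f.Name()).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			os.Exit(1)
		}
	}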
	I0229 01:12:01.426127  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:0d:58:f8 in network default
	I0229 01:12:01.426959  324746 main.go:141] libmachine: (addons-600097) Ensuring networks are active...
	I0229 01:12:01.427006  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:01.427627  324746 main.go:141] libmachine: (addons-600097) Ensuring network default is active
	I0229 01:12:01.427955  324746 main.go:141] libmachine: (addons-600097) Ensuring network mk-addons-600097 is active
	I0229 01:12:01.428372  324746 main.go:141] libmachine: (addons-600097) Getting domain xml...
	I0229 01:12:01.428926  324746 main.go:141] libmachine: (addons-600097) Creating domain...
	I0229 01:12:02.777146  324746 main.go:141] libmachine: (addons-600097) Waiting to get IP...
	I0229 01:12:02.777835  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:02.778207  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:02.778266  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:02.778192  324768 retry.go:31] will retry after 258.133761ms: waiting for machine to come up
	I0229 01:12:03.037697  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:03.038126  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:03.038150  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:03.038076  324768 retry.go:31] will retry after 250.035533ms: waiting for machine to come up
	I0229 01:12:03.289431  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:03.289847  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:03.289877  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:03.289783  324768 retry.go:31] will retry after 440.875147ms: waiting for machine to come up
	I0229 01:12:03.732488  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:03.732880  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:03.732905  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:03.732842  324768 retry.go:31] will retry after 396.006304ms: waiting for machine to come up
	I0229 01:12:04.130600  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:04.131027  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:04.131054  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:04.130958  324768 retry.go:31] will retry after 599.846838ms: waiting for machine to come up
	I0229 01:12:04.732718  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:04.733175  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:04.733208  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:04.733111  324768 retry.go:31] will retry after 664.87235ms: waiting for machine to come up
	I0229 01:12:05.399846  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:05.400203  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:05.400226  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:05.400157  324768 retry.go:31] will retry after 876.719492ms: waiting for machine to come up
	I0229 01:12:06.278871  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:06.279255  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:06.279284  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:06.279205  324768 retry.go:31] will retry after 1.44982438s: waiting for machine to come up
	I0229 01:12:07.730844  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:07.731281  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:07.731332  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:07.731216  324768 retry.go:31] will retry after 1.582055103s: waiting for machine to come up
	I0229 01:12:09.315925  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:09.316413  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:09.316443  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:09.316336  324768 retry.go:31] will retry after 1.423644428s: waiting for machine to come up
	I0229 01:12:10.741772  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:10.742279  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:10.742322  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:10.742231  324768 retry.go:31] will retry after 2.206084184s: waiting for machine to come up
	I0229 01:12:12.951377  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:12.951792  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:12.951828  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:12.951745  324768 retry.go:31] will retry after 3.273018546s: waiting for machine to come up
	I0229 01:12:16.226625  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:16.227093  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:16.227118  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:16.227045  324768 retry.go:31] will retry after 3.33783935s: waiting for machine to come up
	I0229 01:12:19.567338  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:19.567773  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find current IP address of domain addons-600097 in network mk-addons-600097
	I0229 01:12:19.567799  324746 main.go:141] libmachine: (addons-600097) DBG | I0229 01:12:19.567733  324768 retry.go:31] will retry after 5.653686995s: waiting for machine to come up
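	The retry.go lines above are capped, jittered backoff while polling the DHCP leases for the new domain's IP. A minimal self-contained sketch of that pattern follows; the timings and the fake probe in main are assumptions for illustration, not minikube's actual retry implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn until it succeeds, sleeping a jittered, roughly
	// doubling delay between attempts, much like the retry.go lines above.
	func retry(maxWait time.Duration, fn func() error) error {
		deadline := time.Now().Add(maxWait)
		delay := 250 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for machine: %w", err)
			}
			// Add up to 50% jitter so concurrent waiters do not sync up.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		err := retry(30*time.Second, func() error {
			attempts++
			if attempts < 5 {
				return errors.New("unable to find current IP address")
			}
			return nil // pretend the DHCP lease showed up
		})
		fmt.Println("result:", err)
	}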
	I0229 01:12:25.226351  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:25.226816  324746 main.go:141] libmachine: (addons-600097) Found IP for machine: 192.168.39.181
	I0229 01:12:25.226842  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has current primary IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:25.226848  324746 main.go:141] libmachine: (addons-600097) Reserving static IP address...
	I0229 01:12:25.227142  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find host DHCP lease matching {name: "addons-600097", mac: "52:54:00:2a:8d:58", ip: "192.168.39.181"} in network mk-addons-600097
	I0229 01:12:25.296958  324746 main.go:141] libmachine: (addons-600097) DBG | Getting to WaitForSSH function...
	I0229 01:12:25.296996  324746 main.go:141] libmachine: (addons-600097) Reserved static IP address: 192.168.39.181
	I0229 01:12:25.297011  324746 main.go:141] libmachine: (addons-600097) Waiting for SSH to be available...
	I0229 01:12:25.299611  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:25.299925  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097
	I0229 01:12:25.299951  324746 main.go:141] libmachine: (addons-600097) DBG | unable to find defined IP address of network mk-addons-600097 interface with MAC address 52:54:00:2a:8d:58
	I0229 01:12:25.300094  324746 main.go:141] libmachine: (addons-600097) DBG | Using SSH client type: external
	I0229 01:12:25.300121  324746 main.go:141] libmachine: (addons-600097) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa (-rw-------)
	I0229 01:12:25.300197  324746 main.go:141] libmachine: (addons-600097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:12:25.300225  324746 main.go:141] libmachine: (addons-600097) DBG | About to run SSH command:
	I0229 01:12:25.300243  324746 main.go:141] libmachine: (addons-600097) DBG | exit 0
	I0229 01:12:25.303873  324746 main.go:141] libmachine: (addons-600097) DBG | SSH cmd err, output: exit status 255: 
	I0229 01:12:25.303894  324746 main.go:141] libmachine: (addons-600097) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 01:12:25.303902  324746 main.go:141] libmachine: (addons-600097) DBG | command : exit 0
	I0229 01:12:25.303915  324746 main.go:141] libmachine: (addons-600097) DBG | err     : exit status 255
	I0229 01:12:25.303922  324746 main.go:141] libmachine: (addons-600097) DBG | output  : 
	I0229 01:12:28.305749  324746 main.go:141] libmachine: (addons-600097) DBG | Getting to WaitForSSH function...
	I0229 01:12:28.307954  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.308294  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.308339  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.308369  324746 main.go:141] libmachine: (addons-600097) DBG | Using SSH client type: external
	I0229 01:12:28.308377  324746 main.go:141] libmachine: (addons-600097) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa (-rw-------)
	I0229 01:12:28.308442  324746 main.go:141] libmachine: (addons-600097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:12:28.308474  324746 main.go:141] libmachine: (addons-600097) DBG | About to run SSH command:
	I0229 01:12:28.308483  324746 main.go:141] libmachine: (addons-600097) DBG | exit 0
	I0229 01:12:28.434172  324746 main.go:141] libmachine: (addons-600097) DBG | SSH cmd err, output: <nil>: 
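	The WaitForSSH probe is just the external ssh invocation shown in the DBG lines above, repeated until `exit 0` returns status 0. A sketch with os/exec, using the same flags as the log; the host IP and key path passed in main are placeholders, not taken from a real environment.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// sshReady runs `exit 0` over ssh with roughly the options logged
	// above; a zero exit status means the guest's sshd is answering.
	func sshReady(host, keyPath string) bool {
		cmd := exec.Command("/usr/bin/ssh",
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		return cmd.Run() == nil
	}

	func main() {
		// Placeholder values; substitute the machine's IP and key path.
		if sshReady("192.168.39.181", os.ExpandEnv("$HOME/.minikube/machines/addons-600097/id_rsa")) {
			fmt.Println("ssh ready")
		} else {
			fmt.Println("ssh not ready; retry with backoff")
		}
	}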
	I0229 01:12:28.434553  324746 main.go:141] libmachine: (addons-600097) KVM machine creation complete!
	I0229 01:12:28.434813  324746 main.go:141] libmachine: (addons-600097) Calling .GetConfigRaw
	I0229 01:12:28.435345  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:28.435539  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:28.435726  324746 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 01:12:28.435743  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:12:28.437108  324746 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 01:12:28.437123  324746 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 01:12:28.437129  324746 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 01:12:28.437157  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:28.439537  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.439995  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.440027  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.440170  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:28.440345  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.440511  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.440644  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:28.440794  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:28.441024  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:28.441039  324746 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 01:12:28.554036  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:12:28.554072  324746 main.go:141] libmachine: Detecting the provisioner...
	I0229 01:12:28.554080  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:28.557070  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.557374  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.557436  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.557624  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:28.557859  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.558055  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.558213  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:28.558388  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:28.558558  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:28.558569  324746 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 01:12:28.671881  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 01:12:28.671998  324746 main.go:141] libmachine: found compatible host: buildroot
	I0229 01:12:28.672014  324746 main.go:141] libmachine: Provisioning with buildroot...
	I0229 01:12:28.672028  324746 main.go:141] libmachine: (addons-600097) Calling .GetMachineName
	I0229 01:12:28.672303  324746 buildroot.go:166] provisioning hostname "addons-600097"
	I0229 01:12:28.672340  324746 main.go:141] libmachine: (addons-600097) Calling .GetMachineName
	I0229 01:12:28.672553  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:28.675289  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.675693  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.675715  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.675870  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:28.676045  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.676183  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.676291  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:28.676454  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:28.676623  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:28.676636  324746 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-600097 && echo "addons-600097" | sudo tee /etc/hostname
	I0229 01:12:28.808885  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-600097
	
	I0229 01:12:28.808928  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:28.811592  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.811911  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.811939  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.812164  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:28.812403  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.812576  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:28.812754  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:28.812926  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:28.813108  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:28.813127  324746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-600097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-600097/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-600097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:12:28.932636  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:12:28.932667  324746 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 01:12:28.932728  324746 buildroot.go:174] setting up certificates
	I0229 01:12:28.932749  324746 provision.go:83] configureAuth start
	I0229 01:12:28.932764  324746 main.go:141] libmachine: (addons-600097) Calling .GetMachineName
	I0229 01:12:28.933099  324746 main.go:141] libmachine: (addons-600097) Calling .GetIP
	I0229 01:12:28.935796  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.936129  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.936153  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.936298  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:28.938710  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.939051  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:28.939088  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:28.939260  324746 provision.go:138] copyHostCerts
	I0229 01:12:28.939339  324746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 01:12:28.939489  324746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 01:12:28.939590  324746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 01:12:28.939662  324746 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.addons-600097 san=[192.168.39.181 192.168.39.181 localhost 127.0.0.1 minikube addons-600097]
	I0229 01:12:29.007124  324746 provision.go:172] copyRemoteCerts
	I0229 01:12:29.007202  324746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:12:29.007238  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.009932  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.010282  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.010313  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.010495  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.010711  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.010857  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.010991  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:12:29.098268  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 01:12:29.124562  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 01:12:29.149870  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:12:29.175067  324746 provision.go:86] duration metric: configureAuth took 242.303028ms
	I0229 01:12:29.175094  324746 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:12:29.175253  324746 config.go:182] Loaded profile config "addons-600097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:12:29.175330  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.177923  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.178279  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.178312  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.178553  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.178739  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.178921  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.179046  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.179199  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:29.179403  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:29.179425  324746 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 01:12:29.479481  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
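
	A quick manual check that the environment file written above took effect (paths are the ones from this log; both commands are standard):

		cat /etc/sysconfig/crio.minikube   # should print the CRIO_MINIKUBE_OPTIONS line echoed above
		systemctl is-active crio           # "active" once the restart completes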
	
	I0229 01:12:29.479513  324746 main.go:141] libmachine: Checking connection to Docker...
	I0229 01:12:29.479522  324746 main.go:141] libmachine: (addons-600097) Calling .GetURL
	I0229 01:12:29.480791  324746 main.go:141] libmachine: (addons-600097) DBG | Using libvirt version 6000000
	I0229 01:12:29.482899  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.483235  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.483263  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.483414  324746 main.go:141] libmachine: Docker is up and running!
	I0229 01:12:29.483425  324746 main.go:141] libmachine: Reticulating splines...
	I0229 01:12:29.483434  324746 client.go:171] LocalClient.Create took 28.83841574s
	I0229 01:12:29.483468  324746 start.go:167] duration metric: libmachine.API.Create for "addons-600097" took 28.838505881s
	I0229 01:12:29.483481  324746 start.go:300] post-start starting for "addons-600097" (driver="kvm2")
	I0229 01:12:29.483498  324746 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:12:29.483521  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:29.483760  324746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:12:29.483784  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.485744  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.486030  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.486074  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.486180  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.486380  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.486517  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.486667  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:12:29.573820  324746 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:12:29.578791  324746 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:12:29.578822  324746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 01:12:29.578926  324746 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 01:12:29.578951  324746 start.go:303] post-start completed in 95.464729ms
	I0229 01:12:29.578990  324746 main.go:141] libmachine: (addons-600097) Calling .GetConfigRaw
	I0229 01:12:29.579637  324746 main.go:141] libmachine: (addons-600097) Calling .GetIP
	I0229 01:12:29.582100  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.582453  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.582487  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.582721  324746 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/config.json ...
	I0229 01:12:29.582937  324746 start.go:128] duration metric: createHost completed in 28.955884846s
	I0229 01:12:29.582968  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.585039  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.585329  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.585358  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.585479  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.585666  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.585801  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.585929  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.586080  324746 main.go:141] libmachine: Using SSH client type: native
	I0229 01:12:29.586267  324746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I0229 01:12:29.586287  324746 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 01:12:29.699567  324746 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709169149.676426879
	
	I0229 01:12:29.699599  324746 fix.go:206] guest clock: 1709169149.676426879
	I0229 01:12:29.699606  324746 fix.go:219] Guest: 2024-02-29 01:12:29.676426879 +0000 UTC Remote: 2024-02-29 01:12:29.582950342 +0000 UTC m=+29.067750154 (delta=93.476537ms)
	I0229 01:12:29.699627  324746 fix.go:190] guest clock delta is within tolerance: 93.476537ms
	I0229 01:12:29.699633  324746 start.go:83] releasing machines lock for "addons-600097", held for 29.072652544s
	I0229 01:12:29.699654  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:29.699960  324746 main.go:141] libmachine: (addons-600097) Calling .GetIP
	I0229 01:12:29.702461  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.702767  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.702800  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.702976  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:29.703520  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:29.703694  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:12:29.703800  324746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:12:29.703896  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.703962  324746 ssh_runner.go:195] Run: cat /version.json
	I0229 01:12:29.703989  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:12:29.706559  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.706829  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.706883  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.706907  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.707041  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.707224  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.707255  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:29.707278  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:29.707408  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.707430  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:12:29.707583  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:12:29.707590  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:12:29.707728  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:12:29.707862  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:12:29.787943  324746 ssh_runner.go:195] Run: systemctl --version
	I0229 01:12:29.812671  324746 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 01:12:29.978111  324746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:12:29.984986  324746 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:12:29.985042  324746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:12:30.001964  324746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 01:12:30.001989  324746 start.go:475] detecting cgroup driver to use...
	I0229 01:12:30.002047  324746 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:12:30.017789  324746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:12:30.031876  324746 docker.go:217] disabling cri-docker service (if available) ...
	I0229 01:12:30.031939  324746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 01:12:30.045833  324746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 01:12:30.059910  324746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 01:12:30.183967  324746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 01:12:30.355461  324746 docker.go:233] disabling docker service ...
	I0229 01:12:30.355546  324746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 01:12:30.371759  324746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 01:12:30.386024  324746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 01:12:30.515464  324746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 01:12:30.644572  324746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 01:12:30.660416  324746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:12:30.680766  324746 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 01:12:30.680833  324746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:12:30.692482  324746 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 01:12:30.692585  324746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:12:30.703731  324746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:12:30.715441  324746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
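
	After the three sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain lines equivalent to the following (a sketch of the edited keys only, not the whole file):

		pause_image = "registry.k8s.io/pause:3.9"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"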
	I0229 01:12:30.727389  324746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:12:30.739774  324746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:12:30.750320  324746 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 01:12:30.750390  324746 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 01:12:30.764582  324746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:12:30.775824  324746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:12:30.902471  324746 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 01:12:31.053891  324746 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 01:12:31.053977  324746 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 01:12:31.059293  324746 start.go:543] Will wait 60s for crictl version
	I0229 01:12:31.059381  324746 ssh_runner.go:195] Run: which crictl
	I0229 01:12:31.063795  324746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:12:31.100015  324746 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 01:12:31.100139  324746 ssh_runner.go:195] Run: crio --version
	I0229 01:12:31.131983  324746 ssh_runner.go:195] Run: crio --version
	I0229 01:12:31.165463  324746 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 01:12:31.167057  324746 main.go:141] libmachine: (addons-600097) Calling .GetIP
	I0229 01:12:31.169740  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:31.170097  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:12:31.170126  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:12:31.170349  324746 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 01:12:31.175052  324746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:12:31.188928  324746 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:12:31.188979  324746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:12:31.225759  324746 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 01:12:31.225858  324746 ssh_runner.go:195] Run: which lz4
	I0229 01:12:31.231184  324746 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 01:12:31.235894  324746 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 01:12:31.235951  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 01:12:32.960025  324746 crio.go:444] Took 1.728887 seconds to copy over tarball
	I0229 01:12:32.960122  324746 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 01:12:35.879974  324746 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.919813004s)
	I0229 01:12:35.880007  324746 crio.go:451] Took 2.919948 seconds to extract the tarball
	I0229 01:12:35.880018  324746 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 01:12:35.925983  324746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:12:35.983200  324746 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 01:12:35.983234  324746 cache_images.go:84] Images are preloaded, skipping loading
	I0229 01:12:35.983317  324746 ssh_runner.go:195] Run: crio config
	I0229 01:12:36.043493  324746 cni.go:84] Creating CNI manager for ""
	I0229 01:12:36.043525  324746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:12:36.043551  324746 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:12:36.043575  324746 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.181 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-600097 NodeName:addons-600097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 01:12:36.043795  324746 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-600097"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
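
	The rendered kubeadm config above can be exercised without touching node state via kubeadm's dry-run mode (path taken from this log; --dry-run is a standard kubeadm init flag):

		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run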
	
	I0229 01:12:36.043889  324746 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-600097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-600097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
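
	The empty ExecStart= line in the drop-in above is the usual systemd idiom: it clears the ExecStart inherited from /lib/systemd/system/kubelet.service before the next line sets the minikube-specific one. The merged unit can be inspected with:

		systemctl cat kubelet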
	I0229 01:12:36.043970  324746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 01:12:36.056298  324746 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:12:36.056370  324746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:12:36.068977  324746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0229 01:12:36.089716  324746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 01:12:36.110414  324746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0229 01:12:36.131463  324746 ssh_runner.go:195] Run: grep 192.168.39.181	control-plane.minikube.internal$ /etc/hosts
	I0229 01:12:36.136259  324746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
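
	Both /etc/hosts updates in this phase use the same pattern: filter out any stale entry for the name, append a fresh one, and copy the result back with a single sudo cp (the temp file is needed because the shell redirection itself runs unprivileged). A minimal sketch of the idiom with placeholder name and IP:

		{ grep -v $'\texample.internal$' /etc/hosts; echo $'10.0.0.1\texample.internal'; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts   # example.internal / 10.0.0.1 are placeholders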
	I0229 01:12:36.152295  324746 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097 for IP: 192.168.39.181
	I0229 01:12:36.152338  324746 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.152482  324746 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 01:12:36.276858  324746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt ...
	I0229 01:12:36.276894  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt: {Name:mk193ee721ad2abcc60b7c061dc7c62a3de798cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.277056  324746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key ...
	I0229 01:12:36.277067  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key: {Name:mk1521f75403bd7da4291280d460d1915bb5045b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.277138  324746 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 01:12:36.322712  324746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt ...
	I0229 01:12:36.322740  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt: {Name:mk3b4f192034ba0b786cd41aeb52fee609cb164d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.322893  324746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key ...
	I0229 01:12:36.322904  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key: {Name:mka1c6506c3df4f07511468c975fce6d6408c79e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.323005  324746 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.key
	I0229 01:12:36.323019  324746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt with IP's: []
	I0229 01:12:36.407422  324746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt ...
	I0229 01:12:36.407456  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: {Name:mk19674df63e3f5d7d45057f34134d3f56e1ca82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.407614  324746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.key ...
	I0229 01:12:36.407627  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.key: {Name:mk79823e4c989cc5197f7db8a637f177801a3e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.407703  324746 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key.8841b717
	I0229 01:12:36.407721  324746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt.8841b717 with IP's: [192.168.39.181 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 01:12:36.557904  324746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt.8841b717 ...
	I0229 01:12:36.557944  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt.8841b717: {Name:mkb3f015b83c12c3372edcfb215034b00c91b960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.558103  324746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key.8841b717 ...
	I0229 01:12:36.558115  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key.8841b717: {Name:mkf467c11b14d1ad5ca1e8e193d5e9807f316b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.558184  324746 certs.go:337] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt.8841b717 -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt
	I0229 01:12:36.558306  324746 certs.go:341] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key.8841b717 -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key
	I0229 01:12:36.558361  324746 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.key
	I0229 01:12:36.558376  324746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.crt with IP's: []
	I0229 01:12:36.786979  324746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.crt ...
	I0229 01:12:36.787016  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.crt: {Name:mk2d5de8954296b1b84fda3b82111363c26b2900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:12:36.787177  324746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.key ...
	I0229 01:12:36.787197  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.key: {Name:mk7cf24ca02f209f26008edf8707c38c220b4a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
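
	minikube generates all of these certificates in-process (crypto.go); for orientation, a roughly equivalent openssl sequence for one CA-signed client certificate looks like this (file names and subject fields are illustrative, not minikube's exact values):

		openssl genrsa -out client.key 2048
		openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
		openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt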
	I0229 01:12:36.787386  324746 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 01:12:36.787426  324746 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 01:12:36.787457  324746 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:12:36.787493  324746 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 01:12:36.788346  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:12:36.818848  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 01:12:36.845293  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:12:36.872773  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 01:12:36.899044  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:12:36.924984  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:12:36.953695  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:12:36.980083  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 01:12:37.006963  324746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:12:37.033915  324746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:12:37.053190  324746 ssh_runner.go:195] Run: openssl version
	I0229 01:12:37.059791  324746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:12:37.073018  324746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:12:37.078350  324746 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:12:37.078429  324746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:12:37.084748  324746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
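
	The b5213941.0 link name is not arbitrary: OpenSSL locates CA certificates by subject-hash file names of the form <hash>.0, and the hash comes from the x509 -hash command run just above:

		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink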
	I0229 01:12:37.097736  324746 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:12:37.102571  324746 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:12:37.102632  324746 kubeadm.go:404] StartCluster: {Name:addons-600097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-600097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.181 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:12:37.102733  324746 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 01:12:37.102780  324746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 01:12:37.142467  324746 cri.go:89] found id: ""
	I0229 01:12:37.142573  324746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:12:37.154433  324746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:12:37.165816  324746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:12:37.177330  324746 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:12:37.177383  324746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 01:12:37.231806  324746 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 01:12:37.231950  324746 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:12:37.373865  324746 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:12:37.373985  324746 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:12:37.374113  324746 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:12:37.601778  324746 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:12:37.830056  324746 out.go:204]   - Generating certificates and keys ...
	I0229 01:12:37.830191  324746 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:12:37.830303  324746 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:12:37.830412  324746 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 01:12:38.109208  324746 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 01:12:38.349761  324746 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 01:12:38.580724  324746 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 01:12:38.831557  324746 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 01:12:38.831710  324746 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-600097 localhost] and IPs [192.168.39.181 127.0.0.1 ::1]
	I0229 01:12:38.945419  324746 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 01:12:38.945599  324746 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-600097 localhost] and IPs [192.168.39.181 127.0.0.1 ::1]
	I0229 01:12:39.092599  324746 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 01:12:39.164895  324746 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 01:12:39.316641  324746 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 01:12:39.316976  324746 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:12:39.433881  324746 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:12:39.628239  324746 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:12:40.008814  324746 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:12:40.183951  324746 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:12:40.184533  324746 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:12:40.186918  324746 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:12:40.188781  324746 out.go:204]   - Booting up control plane ...
	I0229 01:12:40.188904  324746 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:12:40.189846  324746 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:12:40.191257  324746 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:12:40.212699  324746 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:12:40.212840  324746 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:12:40.212907  324746 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 01:12:40.344832  324746 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:12:46.344485  324746 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.002991 seconds
	I0229 01:12:46.344650  324746 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 01:12:46.363156  324746 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 01:12:46.895676  324746 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 01:12:46.895905  324746 kubeadm.go:322] [mark-control-plane] Marking the node addons-600097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 01:12:47.413021  324746 kubeadm.go:322] [bootstrap-token] Using token: i2768e.hjj2wzw3cu3l808f
	I0229 01:12:47.414777  324746 out.go:204]   - Configuring RBAC rules ...
	I0229 01:12:47.414944  324746 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 01:12:47.421765  324746 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 01:12:47.432939  324746 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 01:12:47.436971  324746 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 01:12:47.442068  324746 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 01:12:47.448488  324746 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 01:12:47.464555  324746 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 01:12:47.722639  324746 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 01:12:47.841469  324746 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 01:12:47.841513  324746 kubeadm.go:322] 
	I0229 01:12:47.841597  324746 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 01:12:47.841610  324746 kubeadm.go:322] 
	I0229 01:12:47.841715  324746 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 01:12:47.841731  324746 kubeadm.go:322] 
	I0229 01:12:47.841770  324746 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 01:12:47.841866  324746 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 01:12:47.841946  324746 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 01:12:47.841963  324746 kubeadm.go:322] 
	I0229 01:12:47.842042  324746 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 01:12:47.842051  324746 kubeadm.go:322] 
	I0229 01:12:47.842149  324746 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 01:12:47.842168  324746 kubeadm.go:322] 
	I0229 01:12:47.842273  324746 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 01:12:47.842385  324746 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 01:12:47.842482  324746 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 01:12:47.842497  324746 kubeadm.go:322] 
	I0229 01:12:47.842634  324746 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 01:12:47.842739  324746 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 01:12:47.842749  324746 kubeadm.go:322] 
	I0229 01:12:47.842849  324746 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token i2768e.hjj2wzw3cu3l808f \
	I0229 01:12:47.842973  324746 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 \
	I0229 01:12:47.843009  324746 kubeadm.go:322] 	--control-plane 
	I0229 01:12:47.843018  324746 kubeadm.go:322] 
	I0229 01:12:47.843118  324746 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 01:12:47.843130  324746 kubeadm.go:322] 
	I0229 01:12:47.843242  324746 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token i2768e.hjj2wzw3cu3l808f \
	I0229 01:12:47.843364  324746 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
	I0229 01:12:47.843528  324746 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:12:47.843559  324746 cni.go:84] Creating CNI manager for ""
	I0229 01:12:47.843569  324746 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:12:47.845124  324746 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 01:12:47.846516  324746 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 01:12:47.882685  324746 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
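
	The 457-byte 1-k8s.conflist written above is not echoed into the log; a minimal bridge conflist of the kind minikube generates would look roughly like the sketch below (contents assumed, with the pod CIDR taken from the kubeadm options earlier in this run):

		# sketch only: the actual 1-k8s.conflist contents are not shown in this log
		cat <<'EOF'
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}
		EOF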
	I0229 01:12:47.922511  324746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 01:12:47.922601  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:47.922661  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=addons-600097 minikube.k8s.io/updated_at=2024_02_29T01_12_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:48.033931  324746 ops.go:34] apiserver oom_adj: -16
	I0229 01:12:48.146850  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:48.647492  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:49.147438  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:49.647522  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:50.147456  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:50.646841  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:51.146903  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:51.646999  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:52.147147  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:52.646959  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:53.146997  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:53.647526  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:54.147011  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:54.646984  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:55.147031  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:55.647467  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:56.147548  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:56.647582  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:57.147136  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:57.647235  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:58.147805  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:58.647063  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:59.147396  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:12:59.647124  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:13:00.147511  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:13:00.647154  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:13:01.147216  324746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:13:01.377718  324746 kubeadm.go:1088] duration metric: took 13.455183803s to wait for elevateKubeSystemPrivileges.
	I0229 01:13:01.377767  324746 kubeadm.go:406] StartCluster complete in 24.275141265s
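
The run of near-identical `kubectl get sa default` probes above is a 500ms poll: after creating the minikube-rbac clusterrolebinding at 01:12:47, the elevateKubeSystemPrivileges step cannot complete until the cluster's `default` service account exists, which is why the duration metric reports ~13.5s of waiting. A minimal sketch of that poll pattern, with hypothetical names (this is not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` every 500ms until the
    // service account exists or the timeout elapses, mirroring the cadence
    // visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                return nil // exit code 0: the service account is visible
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for default service account", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.4/kubectl",
            "/var/lib/minikube/kubeconfig", time.Minute)
        if err != nil {
            fmt.Println(err)
        }
    }
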
	I0229 01:13:01.377805  324746 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:13:01.377961  324746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:13:01.378707  324746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:13:01.379000  324746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 01:13:01.379168  324746 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0229 01:13:01.379260  324746 addons.go:69] Setting yakd=true in profile "addons-600097"
	I0229 01:13:01.379264  324746 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-600097"
	I0229 01:13:01.379282  324746 addons.go:234] Setting addon yakd=true in "addons-600097"
	I0229 01:13:01.379340  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.379343  324746 config.go:182] Loaded profile config "addons-600097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:13:01.379361  324746 addons.go:69] Setting cloud-spanner=true in profile "addons-600097"
	I0229 01:13:01.379377  324746 addons.go:234] Setting addon cloud-spanner=true in "addons-600097"
	I0229 01:13:01.379354  324746 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-600097"
	I0229 01:13:01.379429  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.379452  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.379775  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.379789  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.379801  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.379808  324746 addons.go:69] Setting inspektor-gadget=true in profile "addons-600097"
	I0229 01:13:01.379824  324746 addons.go:234] Setting addon inspektor-gadget=true in "addons-600097"
	I0229 01:13:01.379825  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.379853  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.379867  324746 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-600097"
	I0229 01:13:01.379883  324746 addons.go:69] Setting default-storageclass=true in profile "addons-600097"
	I0229 01:13:01.379896  324746 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-600097"
	I0229 01:13:01.379902  324746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-600097"
	I0229 01:13:01.379873  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.380186  324746 addons.go:69] Setting helm-tiller=true in profile "addons-600097"
	I0229 01:13:01.380209  324746 addons.go:234] Setting addon helm-tiller=true in "addons-600097"
	I0229 01:13:01.380255  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.380259  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.380282  324746 addons.go:69] Setting registry=true in profile "addons-600097"
	I0229 01:13:01.380293  324746 addons.go:234] Setting addon registry=true in "addons-600097"
	I0229 01:13:01.380332  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.380342  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.380377  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.380434  324746 addons.go:69] Setting ingress=true in profile "addons-600097"
	I0229 01:13:01.380448  324746 addons.go:234] Setting addon ingress=true in "addons-600097"
	I0229 01:13:01.380616  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.380639  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.380674  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.380696  324746 addons.go:69] Setting gcp-auth=true in profile "addons-600097"
	I0229 01:13:01.380704  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.380712  324746 mustload.go:65] Loading cluster: addons-600097
	I0229 01:13:01.380754  324746 addons.go:69] Setting storage-provisioner=true in profile "addons-600097"
	I0229 01:13:01.380767  324746 addons.go:234] Setting addon storage-provisioner=true in "addons-600097"
	I0229 01:13:01.380801  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.380890  324746 config.go:182] Loaded profile config "addons-600097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:13:01.381092  324746 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-600097"
	I0229 01:13:01.381123  324746 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-600097"
	I0229 01:13:01.381148  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.381176  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.381203  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.381221  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.381221  324746 addons.go:69] Setting ingress-dns=true in profile "addons-600097"
	I0229 01:13:01.381235  324746 addons.go:234] Setting addon ingress-dns=true in "addons-600097"
	I0229 01:13:01.381252  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.379831  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.381302  324746 addons.go:69] Setting volumesnapshots=true in profile "addons-600097"
	I0229 01:13:01.381313  324746 addons.go:234] Setting addon volumesnapshots=true in "addons-600097"
	I0229 01:13:01.381531  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.381562  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.379856  324746 addons.go:69] Setting metrics-server=true in profile "addons-600097"
	I0229 01:13:01.381628  324746 addons.go:234] Setting addon metrics-server=true in "addons-600097"
	I0229 01:13:01.381673  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.381790  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.381820  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.382235  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.382582  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.382609  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.382923  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.383285  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.383304  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.401142  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43055
	I0229 01:13:01.401151  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
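
Each "Launching plugin server for driver kvm2" / "Plugin server listening at address 127.0.0.1:PORT" pair that follows is libmachine's out-of-process driver model: minikube re-executes the docker-machine-driver-kvm2 binary it found in the workspace, and the child process serves the driver API over RPC on an ephemeral loopback port, presumably one server per concurrent addon goroutine, which is why so many distinct ports appear. A toy version of that pattern, assuming a plain net/rpc transport (libmachine's real wire protocol and method set are not shown in this log):

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    type Driver struct{}

    // GetVersion mirrors the "() Calling .GetVersion" handshake in the log.
    func (d *Driver) GetVersion(_ struct{}, v *int) error { *v = 1; return nil }

    func main() {
        rpc.Register(new(Driver))
        ln, _ := net.Listen("tcp", "127.0.0.1:0") // kernel picks a free port
        fmt.Println("Plugin server listening at address", ln.Addr())
        for {
            conn, err := ln.Accept()
            if err != nil {
                return
            }
            go rpc.ServeConn(conn)
        }
    }
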
	I0229 01:13:01.401805  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.401919  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.402516  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.402543  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.402686  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.402708  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.403034  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.403560  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.403604  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.403828  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.403857  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43571
	I0229 01:13:01.404656  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.404707  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.405142  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.405761  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.405790  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.405866  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0229 01:13:01.406142  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.406700  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.406737  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.406774  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.406799  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.406840  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.406875  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.407757  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.407789  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.417323  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37033
	I0229 01:13:01.418458  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36963
	I0229 01:13:01.418683  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I0229 01:13:01.418687  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0229 01:13:01.419273  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.419445  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.419992  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.420012  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.420161  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.420171  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.420238  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.420558  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.420640  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.420882  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.420945  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.420991  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.422129  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.422156  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.422468  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.422487  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.422607  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.422616  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.422982  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.423039  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.423590  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.423624  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.423795  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.425526  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.425547  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.425609  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.426330  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.426365  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.430454  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.431063  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.431089  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.431528  324746 addons.go:234] Setting addon default-storageclass=true in "addons-600097"
	I0229 01:13:01.431581  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.431996  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.432048  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.440558  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37273
	I0229 01:13:01.440716  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44001
	I0229 01:13:01.441238  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.441844  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.441865  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.442319  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.442765  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.442940  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.442993  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.446682  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.446703  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.446918  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0229 01:13:01.447471  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.447559  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.448061  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.448078  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.448198  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0229 01:13:01.448380  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.448663  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.448826  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.449331  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.449350  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.449713  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.450309  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.450350  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.450559  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.453836  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I0229 01:13:01.453975  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
	I0229 01:13:01.454473  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.454974  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.454992  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.455478  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.456141  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.456181  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.456720  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
	I0229 01:13:01.456906  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.457095  324746 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-600097"
	I0229 01:13:01.457144  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:01.457495  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.457512  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.457560  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.457601  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.457931  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.458558  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.458584  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.458596  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.460489  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0229 01:13:01.459078  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.461230  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0229 01:13:01.463306  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0229 01:13:01.462157  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.462565  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.464440  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.465967  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0229 01:13:01.464885  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34939
	I0229 01:13:01.465276  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.465683  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.468251  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0229 01:13:01.467239  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.467473  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.467513  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.470504  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0229 01:13:01.469977  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.470084  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.470596  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.470756  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.471549  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.471801  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0229 01:13:01.472490  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45575
	I0229 01:13:01.472520  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.474312  324746 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0229 01:13:01.473128  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0229 01:13:01.473375  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.473911  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.474199  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.475662  324746 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0229 01:13:01.475678  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0229 01:13:01.475698  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
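
Lines of the form "scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)" mean the addon manifests are not files on the Jenkins host: they are rendered in memory and streamed over the SSH session that sshutil opens with the machine's id_rsa key. A rough sketch of that pattern using golang.org/x/crypto/ssh (an assumption for illustration; the actual helper is the ssh_runner.go:362 call site referenced in the log):

    // Stream in-memory bytes to a path on the guest instead of copying a file.
    package sshcopy

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // CopyMemory writes data to dst on the guest by piping it into sudo tee.
    func CopyMemory(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data) // the "memory" side of the transfer
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }
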
	I0229 01:13:01.477375  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0229 01:13:01.476014  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.479589  324746 out.go:177]   - Using image docker.io/registry:2.8.3
	I0229 01:13:01.478456  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0229 01:13:01.478479  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.478660  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.479295  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.481881  324746 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0229 01:13:01.480754  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0229 01:13:01.480800  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.480900  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.481114  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.483218  324746 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0229 01:13:01.483243  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0229 01:13:01.483262  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.483296  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.483314  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.483338  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.483634  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37325
	I0229 01:13:01.484054  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.484243  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	I0229 01:13:01.484263  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.484389  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.486037  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44253
	I0229 01:13:01.486167  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
	I0229 01:13:01.486663  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.486985  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.487168  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.487181  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.487288  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.487295  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.487305  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.488710  324746 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0229 01:13:01.487643  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.487701  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.487903  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.487930  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.488145  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.488380  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.488521  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.489643  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.489885  324746 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0229 01:13:01.489896  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.489904  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0229 01:13:01.489919  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.489921  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.489931  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.490003  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.490024  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.490191  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.490202  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.490258  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.490364  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.490417  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.490597  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.490669  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.490703  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.490922  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.490967  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.491312  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.492392  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.492416  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.492920  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.494781  324746 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0229 01:13:01.496022  324746 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0229 01:13:01.493951  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.494283  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.494995  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.496130  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.496155  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.495720  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I0229 01:13:01.495761  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.496046  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0229 01:13:01.496308  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.496330  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.496648  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.498359  324746 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0229 01:13:01.499487  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0229 01:13:01.499504  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0229 01:13:01.498641  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.499522  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.499552  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.497905  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.498018  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0229 01:13:01.497495  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.499814  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.499052  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.499410  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0229 01:13:01.499880  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.499913  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.500212  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.500231  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.500665  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.500690  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.500978  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.500996  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.501128  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.501143  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.501204  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.501467  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.501482  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.501602  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.501622  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.501882  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.501937  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.502077  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.502127  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.502161  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.504086  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.505888  324746 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0229 01:13:01.504484  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.505435  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.506028  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.507225  324746 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 01:13:01.507238  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 01:13:01.507256  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.507342  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.507365  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.507561  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.509002  324746 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0229 01:13:01.507847  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.510393  324746 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 01:13:01.510405  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0229 01:13:01.510422  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.511112  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40547
	I0229 01:13:01.511261  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I0229 01:13:01.511360  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.511637  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.511896  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.512428  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.512629  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.512643  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.512832  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.513782  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.513573  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.513809  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.514280  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.514376  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.514497  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.515056  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:01.515095  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:01.515552  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0229 01:13:01.515675  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.515771  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.515960  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.516258  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.516336  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.516352  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.516498  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.516523  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0229 01:13:01.516558  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.516823  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.517102  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.517194  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.517354  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.517501  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.517675  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.517689  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.517796  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.517812  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.518148  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.518165  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.518380  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.518435  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.520031  324746 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0229 01:13:01.518814  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.519693  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.521346  324746 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0229 01:13:01.521366  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0229 01:13:01.521387  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.523255  324746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:13:01.522116  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.524568  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.525783  324746 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0229 01:13:01.524742  324746 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:13:01.524918  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.525118  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.526971  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 01:13:01.527004  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.527004  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.527093  324746 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 01:13:01.527109  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0229 01:13:01.527120  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.527310  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.527485  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.528018  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.530615  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.530782  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.531092  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.531113  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.531216  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.531239  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.531440  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.531503  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.531665  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.531714  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.531832  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.531846  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42839
	I0229 01:13:01.531870  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.531980  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.532041  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.532776  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.533314  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.533338  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.533803  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.534000  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.535657  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.535915  324746 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 01:13:01.535935  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 01:13:01.535953  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.536551  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0229 01:13:01.537066  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.537167  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37311
	I0229 01:13:01.537846  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.537869  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.537888  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:01.538296  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.538503  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.538511  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:01.538526  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:01.538981  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:01.539189  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:01.540347  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.542144  324746 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0229 01:13:01.540821  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.541002  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:01.541424  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.544604  324746 out.go:177]   - Using image docker.io/busybox:stable
	I0229 01:13:01.543445  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.543492  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.547256  324746 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 01:13:01.545823  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.545943  324746 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 01:13:01.546124  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.548450  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0229 01:13:01.548466  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.549840  324746 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 01:13:01.548587  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.551008  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.552501  324746 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.6
	I0229 01:13:01.551392  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.552531  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.551553  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.553927  324746 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 01:13:01.553947  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0229 01:13:01.553965  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:01.554018  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.554157  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.554313  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.556851  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.557253  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:01.557303  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:01.557438  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:01.557627  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:01.557821  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:01.557975  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:01.859438  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 01:13:01.888146  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0229 01:13:01.889739  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 01:13:01.891123  324746 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-600097" context rescaled to 1 replicas
	I0229 01:13:01.891157  324746 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.181 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 01:13:01.892969  324746 out.go:177] * Verifying Kubernetes components...
	I0229 01:13:01.894268  324746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:13:01.936643  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 01:13:02.067493  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0229 01:13:02.067520  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0229 01:13:02.077550  324746 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0229 01:13:02.077572  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0229 01:13:02.120391  324746 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0229 01:13:02.120418  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0229 01:13:02.133074  324746 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0229 01:13:02.133096  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0229 01:13:02.177553  324746 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0229 01:13:02.177578  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0229 01:13:02.208140  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 01:13:02.209571  324746 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0229 01:13:02.209592  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0229 01:13:02.252152  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:13:02.256537  324746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 01:13:02.256561  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0229 01:13:02.292311  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 01:13:02.383965  324746 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.00492685s)
	I0229 01:13:02.384121  324746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 01:13:02.390529  324746 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0229 01:13:02.390550  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0229 01:13:02.412685  324746 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0229 01:13:02.412708  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0229 01:13:02.416709  324746 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0229 01:13:02.416725  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0229 01:13:02.425279  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0229 01:13:02.425308  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0229 01:13:02.437826  324746 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0229 01:13:02.437852  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0229 01:13:02.559211  324746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 01:13:02.559248  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 01:13:02.562856  324746 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0229 01:13:02.562874  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0229 01:13:02.720091  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0229 01:13:02.741145  324746 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0229 01:13:02.741177  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0229 01:13:02.747605  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0229 01:13:02.757979  324746 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0229 01:13:02.758001  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0229 01:13:02.789632  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0229 01:13:02.789658  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0229 01:13:02.881663  324746 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0229 01:13:02.881691  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0229 01:13:02.977472  324746 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 01:13:02.977506  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 01:13:03.011896  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0229 01:13:03.011924  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0229 01:13:03.147619  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0229 01:13:03.147651  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0229 01:13:03.160802  324746 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0229 01:13:03.160830  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0229 01:13:03.191987  324746 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0229 01:13:03.192021  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0229 01:13:03.273364  324746 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0229 01:13:03.273387  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0229 01:13:03.285187  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 01:13:03.389404  324746 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 01:13:03.389430  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0229 01:13:03.413139  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0229 01:13:03.413162  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0229 01:13:03.442535  324746 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0229 01:13:03.442564  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0229 01:13:03.517522  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0229 01:13:03.615031  324746 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0229 01:13:03.615061  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0229 01:13:03.622107  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0229 01:13:03.622130  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0229 01:13:03.628091  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 01:13:03.747989  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0229 01:13:03.748030  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0229 01:13:03.773706  324746 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 01:13:03.773729  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0229 01:13:03.885650  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0229 01:13:03.885674  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0229 01:13:03.926784  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 01:13:03.994900  324746 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 01:13:03.994926  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0229 01:13:04.135899  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 01:13:06.029590  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.1701135s)
	I0229 01:13:06.029663  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:06.029676  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:06.030051  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:06.030074  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:06.030088  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:06.030096  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:06.030101  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:06.030389  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:06.030445  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:06.030413  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:06.037671  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:06.037691  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:06.038038  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:06.038075  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:06.038092  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:07.166730  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.278516924s)
	I0229 01:13:07.166807  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:07.166826  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:07.167263  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:07.167284  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:07.167294  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:07.167303  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:07.167304  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:07.167543  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:07.167558  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:08.084574  324746 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0229 01:13:08.084618  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:08.088035  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:08.088454  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:08.088482  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:08.088650  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:08.088862  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:08.089043  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:08.089203  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:08.788248  324746 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0229 01:13:08.935184  324746 addons.go:234] Setting addon gcp-auth=true in "addons-600097"
	I0229 01:13:08.935257  324746 host.go:66] Checking if "addons-600097" exists ...
	I0229 01:13:08.935613  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:08.935651  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:08.951587  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
	I0229 01:13:08.952061  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:08.952621  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:08.952648  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:08.953011  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:08.953647  324746 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:13:08.953680  324746 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:13:08.985322  324746 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0229 01:13:08.985829  324746 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:13:08.986378  324746 main.go:141] libmachine: Using API Version  1
	I0229 01:13:08.986412  324746 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:13:08.986779  324746 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:13:08.987038  324746 main.go:141] libmachine: (addons-600097) Calling .GetState
	I0229 01:13:08.988482  324746 main.go:141] libmachine: (addons-600097) Calling .DriverName
	I0229 01:13:08.988738  324746 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0229 01:13:08.988762  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHHostname
	I0229 01:13:08.991640  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:08.992152  324746 main.go:141] libmachine: (addons-600097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:8d:58", ip: ""} in network mk-addons-600097: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:16 +0000 UTC Type:0 Mac:52:54:00:2a:8d:58 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:addons-600097 Clientid:01:52:54:00:2a:8d:58}
	I0229 01:13:08.992182  324746 main.go:141] libmachine: (addons-600097) DBG | domain addons-600097 has defined IP address 192.168.39.181 and MAC address 52:54:00:2a:8d:58 in network mk-addons-600097
	I0229 01:13:08.992329  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHPort
	I0229 01:13:08.992527  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHKeyPath
	I0229 01:13:08.992694  324746 main.go:141] libmachine: (addons-600097) Calling .GetSSHUsername
	I0229 01:13:08.992843  324746 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/addons-600097/id_rsa Username:docker}
	I0229 01:13:09.809036  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.919247429s)
	I0229 01:13:09.809109  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:09.809119  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:09.809049  324746 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (7.914750701s)
	I0229 01:13:09.809421  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:09.809445  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:09.809456  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:09.809464  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:09.809710  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:09.809727  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:09.810334  324746 node_ready.go:35] waiting up to 6m0s for node "addons-600097" to be "Ready" ...
	I0229 01:13:09.898769  324746 node_ready.go:49] node "addons-600097" has status "Ready":"True"
	I0229 01:13:09.898805  324746 node_ready.go:38] duration metric: took 88.44004ms waiting for node "addons-600097" to be "Ready" ...
	I0229 01:13:09.898820  324746 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 01:13:09.945986  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:09.946016  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:09.946454  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:09.946483  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:09.946513  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:09.964826  324746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-4pcrt" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:09.989066  324746 pod_ready.go:92] pod "coredns-5dd5756b68-4pcrt" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:09.989104  324746 pod_ready.go:81] duration metric: took 24.247475ms waiting for pod "coredns-5dd5756b68-4pcrt" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:09.989119  324746 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9fvrj" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.009199  324746 pod_ready.go:97] pod "coredns-5dd5756b68-9fvrj" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:00 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:00 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.181 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-29 01:13:00 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-29 01:13:02 +0000 UTC,FinishedAt:2024-02-29 01:13:09 +0000 UTC,ContainerID:cri-o://3066173b826c9ea3e073b61e5596e170c8ae5e512b01d82cc952e365248f32ea,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://3066173b826c9ea3e073b61e5596e170c8ae5e512b01d82cc952e365248f32ea Started:0xc003303280 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0229 01:13:10.009236  324746 pod_ready.go:81] duration metric: took 20.108612ms waiting for pod "coredns-5dd5756b68-9fvrj" in "kube-system" namespace to be "Ready" ...
	E0229 01:13:10.009255  324746 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-9fvrj" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:00 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:09 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:09 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-29 01:13:00 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.181 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-02-29 01:13:00 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-02-29 01:13:02 +0000 UTC,FinishedAt:2024-02-29 01:13:09 +0000 UTC,ContainerID:cri-o://3066173b826c9ea3e073b61e5596e170c8ae5e512b01d82cc952e365248f32ea,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://3066173b826c9ea3e073b61e5596e170c8ae5e512b01d82cc952e365248f32ea Started:0xc003303280 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0229 01:13:10.009264  324746 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.039930  324746 pod_ready.go:92] pod "etcd-addons-600097" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:10.039960  324746 pod_ready.go:81] duration metric: took 30.686865ms waiting for pod "etcd-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.039974  324746 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.054081  324746 pod_ready.go:92] pod "kube-apiserver-addons-600097" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:10.054106  324746 pod_ready.go:81] duration metric: took 14.124935ms waiting for pod "kube-apiserver-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.054117  324746 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.224823  324746 pod_ready.go:92] pod "kube-controller-manager-addons-600097" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:10.224850  324746 pod_ready.go:81] duration metric: took 170.727451ms waiting for pod "kube-controller-manager-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.224863  324746 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9h94v" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.615408  324746 pod_ready.go:92] pod "kube-proxy-9h94v" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:10.615436  324746 pod_ready.go:81] duration metric: took 390.566786ms waiting for pod "kube-proxy-9h94v" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:10.615446  324746 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:11.013870  324746 pod_ready.go:92] pod "kube-scheduler-addons-600097" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:11.013896  324746 pod_ready.go:81] duration metric: took 398.443377ms waiting for pod "kube-scheduler-addons-600097" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:11.013913  324746 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:11.741742  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.533558773s)
	I0229 01:13:11.741820  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.741817  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.489627997s)
	I0229 01:13:11.741863  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.741864  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.805189163s)
	I0229 01:13:11.741833  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.741899  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.741917  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.741949  324746 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (9.357788514s)
	I0229 01:13:11.741866  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.449530103s)
	I0229 01:13:11.741992  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.741994  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.02187216s)
	I0229 01:13:11.742007  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742002  324746 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0229 01:13:11.741879  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742031  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742043  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742040  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.994400636s)
	I0229 01:13:11.742076  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742087  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742132  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.456915888s)
	I0229 01:13:11.742153  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742163  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742209  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.224654106s)
	I0229 01:13:11.742239  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742250  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742316  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.114193215s)
	W0229 01:13:11.742345  324746 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 01:13:11.742391  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.815559786s)
	I0229 01:13:11.742403  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.742420  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.742423  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.742433  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.742434  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.742438  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742444  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.742448  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742452  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742461  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742382  324746 retry.go:31] will retry after 361.34681ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 01:13:11.742451  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.742490  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.742498  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.742501  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.742508  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742425  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.742515  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.742517  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742522  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.742471  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.742534  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742541  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742509  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742570  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.742453  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.742604  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.742613  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.746349  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.746366  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.746375  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.746386  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.746435  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.746457  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.746482  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.746489  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.746496  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.746552  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.746577  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.746583  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.746592  324746 addons.go:470] Verifying addon ingress=true in "addons-600097"
	I0229 01:13:11.748340  324746 out.go:177] * Verifying ingress addon...
	I0229 01:13:11.746897  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.746925  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.746986  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747005  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747026  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747038  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747041  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747067  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747074  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747078  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747098  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747102  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747116  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.747119  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747667  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.747699  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.749838  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749850  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749868  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749874  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749878  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749900  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.749915  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.749942  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749958  324746 addons.go:470] Verifying addon metrics-server=true in "addons-600097"
	I0229 01:13:11.749975  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.749943  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.750012  324746 addons.go:470] Verifying addon registry=true in "addons-600097"
	I0229 01:13:11.749947  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:11.750029  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:11.751460  324746 out.go:177] * Verifying registry addon...
	I0229 01:13:11.750175  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.750240  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.750251  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:11.750288  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:11.750789  324746 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0229 01:13:11.752635  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.752682  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:11.754000  324746 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-600097 service yakd-dashboard -n yakd-dashboard
	
	I0229 01:13:11.753290  324746 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0229 01:13:11.764863  324746 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0229 01:13:11.764893  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:11.773063  324746 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0229 01:13:11.773080  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:12.104163  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 01:13:12.234846  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.098868184s)
	I0229 01:13:12.234889  324746 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.246125084s)
	I0229 01:13:12.234920  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:12.234935  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:12.236620  324746 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 01:13:12.235257  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:12.235297  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:12.237883  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:12.237905  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:12.237918  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:12.239164  324746 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0229 01:13:12.240550  324746 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0229 01:13:12.240573  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0229 01:13:12.238215  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:12.238252  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:12.240604  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:12.240625  324746 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-600097"
	I0229 01:13:12.242387  324746 out.go:177] * Verifying csi-hostpath-driver addon...
	I0229 01:13:12.244298  324746 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0229 01:13:12.253563  324746 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0229 01:13:12.253582  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:12.264682  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:12.266782  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:12.323117  324746 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0229 01:13:12.323146  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0229 01:13:12.410200  324746 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 01:13:12.410238  324746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5447 bytes)
	I0229 01:13:12.490836  324746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 01:13:12.755271  324746 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0229 01:13:12.755297  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:12.770849  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:12.771299  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:13.023041  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:13.327226  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:13.345669  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:13.345920  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:13.750287  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:13.759234  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:13.762140  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:14.253924  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:14.256222  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:14.261222  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:14.631430  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.527210744s)
	I0229 01:13:14.631508  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:14.631521  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:14.631854  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:14.631873  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:14.631883  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:14.631896  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:14.632158  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:14.632202  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:14.632243  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:14.802365  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:14.802752  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:14.802924  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:15.096186  324746 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.605295689s)
	I0229 01:13:15.096247  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:15.096267  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:15.096609  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:15.096634  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:15.096643  324746 main.go:141] libmachine: Making call to close driver server
	I0229 01:13:15.096644  324746 main.go:141] libmachine: (addons-600097) DBG | Closing plugin on server side
	I0229 01:13:15.096651  324746 main.go:141] libmachine: (addons-600097) Calling .Close
	I0229 01:13:15.096905  324746 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:13:15.096920  324746 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:13:15.097951  324746 addons.go:470] Verifying addon gcp-auth=true in "addons-600097"
	I0229 01:13:15.099717  324746 out.go:177] * Verifying gcp-auth addon...
	I0229 01:13:15.102076  324746 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0229 01:13:15.103622  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:15.122633  324746 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0229 01:13:15.122652  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:15.250736  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:15.256707  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:15.266927  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:15.607302  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:15.753087  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:15.759765  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:15.764841  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:16.107385  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:16.251349  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:16.258039  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:16.261543  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:16.606454  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:16.751405  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:16.757056  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:16.759640  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:17.105808  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:17.250491  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:17.270295  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:17.271381  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:17.520743  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:17.606014  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:18.011553  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:18.014977  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:18.015115  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:18.107411  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:18.250942  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:18.257352  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:18.259641  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:18.606911  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:18.750144  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:18.757007  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:18.759949  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:19.106743  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:19.250210  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:19.257484  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:19.260610  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:19.521201  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:19.606923  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:19.751598  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:20.243615  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:20.245010  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:20.248396  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:20.254354  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:20.257531  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:20.259581  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:20.606277  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:20.750095  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:20.756931  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:20.760097  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:21.106288  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:21.250390  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:21.257964  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:21.260202  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:21.612545  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:21.750045  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:21.757593  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:21.765374  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:22.020725  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:22.106795  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:22.250080  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:22.257648  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:22.263861  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:22.606937  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:22.751379  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:22.757297  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:22.761035  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:23.106326  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:23.251520  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:23.256607  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:23.263991  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:23.611385  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:23.752583  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:23.758440  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:23.760133  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:24.106500  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:24.268566  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:24.269504  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:24.274238  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:24.520905  324746 pod_ready.go:102] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"False"
	I0229 01:13:24.606053  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:24.751163  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:24.757530  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:24.760138  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:25.028556  324746 pod_ready.go:92] pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:25.028591  324746 pod_ready.go:81] duration metric: took 14.014669893s waiting for pod "metrics-server-69cf46c98-hrq8h" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:25.028606  324746 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qctgj" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:25.037194  324746 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-qctgj" in "kube-system" namespace has status "Ready":"True"
	I0229 01:13:25.037218  324746 pod_ready.go:81] duration metric: took 8.604188ms waiting for pod "nvidia-device-plugin-daemonset-qctgj" in "kube-system" namespace to be "Ready" ...
	I0229 01:13:25.037235  324746 pod_ready.go:38] duration metric: took 15.138402406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
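	The pod_ready and kapi lines above all follow one pattern: list pods by label selector and poll until every match reports the Ready condition. A minimal client-go sketch of that pattern, assuming a kubeconfig at the default path; the namespace and selector are taken from the gcp-auth wait in this log, and this is illustrative, not minikube's actual kapi/pod_ready code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// allReady reports whether at least one pod matches selector in ns and
	// every match has the Ready condition set to True.
	func allReady(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet, keep polling
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ok, err := allReady(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth")
			if err == nil && ok {
				fmt.Println("all pods Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
		}
	}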
	I0229 01:13:25.037251  324746 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:13:25.037302  324746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:13:25.056671  324746 api_server.go:72] duration metric: took 23.165477386s to wait for apiserver process to appear ...
	I0229 01:13:25.056705  324746 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:13:25.056734  324746 api_server.go:253] Checking apiserver healthz at https://192.168.39.181:8443/healthz ...
	I0229 01:13:25.065681  324746 api_server.go:279] https://192.168.39.181:8443/healthz returned 200:
	ok
	I0229 01:13:25.067172  324746 api_server.go:141] control plane version: v1.28.4
	I0229 01:13:25.067204  324746 api_server.go:131] duration metric: took 10.490036ms to wait for apiserver health ...
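	Once the kube-apiserver process appears, the log probes /healthz until it returns 200 "ok" and then reads the control-plane version. A self-contained sketch of that probe, with the endpoint taken from the log; InsecureSkipVerify here stands in for the cluster-CA handling the real client performs:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body) // body is "ok"
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // back off between probes
		}
		return fmt.Errorf("healthz at %s not ready within %v", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.181:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}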
	I0229 01:13:25.067215  324746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:13:25.079926  324746 system_pods.go:59] 18 kube-system pods found
	I0229 01:13:25.079953  324746 system_pods.go:61] "coredns-5dd5756b68-4pcrt" [3eb43d6f-14c6-42de-be44-4441b9f518ff] Running
	I0229 01:13:25.079960  324746 system_pods.go:61] "csi-hostpath-attacher-0" [d0230873-4868-4afc-9928-0dd97f8361e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0229 01:13:25.079976  324746 system_pods.go:61] "csi-hostpath-resizer-0" [96d4e7b6-6974-4d78-a074-175d8b634226] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0229 01:13:25.079989  324746 system_pods.go:61] "csi-hostpathplugin-qp8h8" [d8ff48fd-0803-4e5a-8d3d-71b3c9399207] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0229 01:13:25.080000  324746 system_pods.go:61] "etcd-addons-600097" [807e5dc6-85a5-40d2-8fc3-de8285d05e68] Running
	I0229 01:13:25.080010  324746 system_pods.go:61] "kube-apiserver-addons-600097" [b5798f77-a50f-4e7a-b51a-7529a8e8152b] Running
	I0229 01:13:25.080019  324746 system_pods.go:61] "kube-controller-manager-addons-600097" [683a75b8-f632-4aa2-9375-8c0a3f3a443f] Running
	I0229 01:13:25.080030  324746 system_pods.go:61] "kube-ingress-dns-minikube" [bd4a21c2-8e95-404a-a7db-ee307a4d8899] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0229 01:13:25.080038  324746 system_pods.go:61] "kube-proxy-9h94v" [86903f1f-0d36-4812-acde-9145f651a025] Running
	I0229 01:13:25.080044  324746 system_pods.go:61] "kube-scheduler-addons-600097" [6cb4c51f-d912-471b-8c97-54c360e21d0b] Running
	I0229 01:13:25.080047  324746 system_pods.go:61] "metrics-server-69cf46c98-hrq8h" [e7098420-28d2-4a6b-a93d-4fefa31359b3] Running
	I0229 01:13:25.080053  324746 system_pods.go:61] "nvidia-device-plugin-daemonset-qctgj" [a6d1f69b-373d-49c1-a1da-9b03d99cc13c] Running
	I0229 01:13:25.080060  324746 system_pods.go:61] "registry-proxy-rntnp" [48e9e81a-42f9-4d1d-9354-285750cd1bd8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0229 01:13:25.080067  324746 system_pods.go:61] "registry-q4qbx" [44db4128-7109-4402-9de5-49bec8724d9f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0229 01:13:25.080073  324746 system_pods.go:61] "snapshot-controller-58dbcc7b99-9b2bf" [87a91b45-bf66-4d8c-a507-e1308617e2e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 01:13:25.080082  324746 system_pods.go:61] "snapshot-controller-58dbcc7b99-rt5hl" [c3d0545b-72bb-4f39-a718-5aa937bc37cf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 01:13:25.080086  324746 system_pods.go:61] "storage-provisioner" [c0d595aa-7503-497b-8719-8a82ca333df3] Running
	I0229 01:13:25.080092  324746 system_pods.go:61] "tiller-deploy-7b677967b9-w6sfn" [d68c9fec-87de-4b51-b793-1fce3f10efe2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0229 01:13:25.080104  324746 system_pods.go:74] duration metric: took 12.881249ms to wait for pod list to return data ...
	I0229 01:13:25.080119  324746 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:13:25.083819  324746 default_sa.go:45] found service account: "default"
	I0229 01:13:25.083839  324746 default_sa.go:55] duration metric: took 3.70817ms for default service account to be created ...
	I0229 01:13:25.083849  324746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 01:13:25.097945  324746 system_pods.go:86] 18 kube-system pods found
	I0229 01:13:25.097972  324746 system_pods.go:89] "coredns-5dd5756b68-4pcrt" [3eb43d6f-14c6-42de-be44-4441b9f518ff] Running
	I0229 01:13:25.097980  324746 system_pods.go:89] "csi-hostpath-attacher-0" [d0230873-4868-4afc-9928-0dd97f8361e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0229 01:13:25.097986  324746 system_pods.go:89] "csi-hostpath-resizer-0" [96d4e7b6-6974-4d78-a074-175d8b634226] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0229 01:13:25.097995  324746 system_pods.go:89] "csi-hostpathplugin-qp8h8" [d8ff48fd-0803-4e5a-8d3d-71b3c9399207] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0229 01:13:25.098003  324746 system_pods.go:89] "etcd-addons-600097" [807e5dc6-85a5-40d2-8fc3-de8285d05e68] Running
	I0229 01:13:25.098008  324746 system_pods.go:89] "kube-apiserver-addons-600097" [b5798f77-a50f-4e7a-b51a-7529a8e8152b] Running
	I0229 01:13:25.098013  324746 system_pods.go:89] "kube-controller-manager-addons-600097" [683a75b8-f632-4aa2-9375-8c0a3f3a443f] Running
	I0229 01:13:25.098019  324746 system_pods.go:89] "kube-ingress-dns-minikube" [bd4a21c2-8e95-404a-a7db-ee307a4d8899] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0229 01:13:25.098029  324746 system_pods.go:89] "kube-proxy-9h94v" [86903f1f-0d36-4812-acde-9145f651a025] Running
	I0229 01:13:25.098036  324746 system_pods.go:89] "kube-scheduler-addons-600097" [6cb4c51f-d912-471b-8c97-54c360e21d0b] Running
	I0229 01:13:25.098040  324746 system_pods.go:89] "metrics-server-69cf46c98-hrq8h" [e7098420-28d2-4a6b-a93d-4fefa31359b3] Running
	I0229 01:13:25.098048  324746 system_pods.go:89] "nvidia-device-plugin-daemonset-qctgj" [a6d1f69b-373d-49c1-a1da-9b03d99cc13c] Running
	I0229 01:13:25.098053  324746 system_pods.go:89] "registry-proxy-rntnp" [48e9e81a-42f9-4d1d-9354-285750cd1bd8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0229 01:13:25.098064  324746 system_pods.go:89] "registry-q4qbx" [44db4128-7109-4402-9de5-49bec8724d9f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0229 01:13:25.098070  324746 system_pods.go:89] "snapshot-controller-58dbcc7b99-9b2bf" [87a91b45-bf66-4d8c-a507-e1308617e2e8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 01:13:25.098076  324746 system_pods.go:89] "snapshot-controller-58dbcc7b99-rt5hl" [c3d0545b-72bb-4f39-a718-5aa937bc37cf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 01:13:25.098081  324746 system_pods.go:89] "storage-provisioner" [c0d595aa-7503-497b-8719-8a82ca333df3] Running
	I0229 01:13:25.098086  324746 system_pods.go:89] "tiller-deploy-7b677967b9-w6sfn" [d68c9fec-87de-4b51-b793-1fce3f10efe2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0229 01:13:25.098092  324746 system_pods.go:126] duration metric: took 14.237704ms to wait for k8s-apps to be running ...
	I0229 01:13:25.098100  324746 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:13:25.098144  324746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:13:25.115753  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:25.116555  324746 system_svc.go:56] duration metric: took 18.44499ms WaitForService to wait for kubelet.
	I0229 01:13:25.116587  324746 kubeadm.go:581] duration metric: took 23.225396491s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
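	The WaitForService step above reduces to running systemctl inside the VM (over SSH, with sudo, per the ssh_runner line) and treating exit status 0 as "active". A local sketch of the same check, assuming a host with systemd; the unit name is taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletActive mirrors the check above: `systemctl is-active --quiet`
	// prints nothing and exits 0 iff the unit is currently active.
	func kubeletActive() bool {
		return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", kubeletActive())
	}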
	I0229 01:13:25.116614  324746 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:13:25.121703  324746 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:13:25.121763  324746 node_conditions.go:123] node cpu capacity is 2
	I0229 01:13:25.121783  324746 node_conditions.go:105] duration metric: took 5.162231ms to run NodePressure ...
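	The NodePressure verification reads each node's capacity (the "17734596Ki" ephemeral storage and cpu "2" logged above) and confirms no pressure condition is set. A hedged client-go sketch of that read, with client setup as in the earlier snippet; this is illustrative, not minikube's node_conditions code:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity figures of the kind logged above.
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu().String(),
				n.Status.Capacity.StorageEphemeral().String())
			// A node fails the check if any pressure condition is True.
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						fmt.Printf("  pressure: %s\n", c.Type)
					}
				}
			}
		}
	}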
	I0229 01:13:25.121801  324746 start.go:228] waiting for startup goroutines ...
	I0229 01:13:25.251490  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:25.262009  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:25.263859  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:25.606940  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:25.752739  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:25.756403  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:25.760191  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:26.108293  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:26.250578  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:26.256896  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:26.259774  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:26.607159  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:26.752011  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:26.757296  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:26.759574  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:27.105730  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:27.251019  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:27.258871  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:27.262721  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:27.606364  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:27.752212  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:27.756792  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:27.760258  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:28.106949  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:28.250860  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:28.257572  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:28.260438  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:28.794907  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:28.814334  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:28.816357  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:28.818673  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:29.106424  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:29.250850  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:29.258780  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:29.261282  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:29.606481  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:29.750711  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:29.756768  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:29.759853  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:30.106618  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:30.250122  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:30.257397  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:30.268705  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:30.606898  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:30.750556  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:30.757454  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:30.760729  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:31.107663  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:31.251971  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:31.260164  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:31.263332  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:31.606540  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:31.750869  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:31.757265  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:31.761363  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:32.106145  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:32.250746  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:32.257694  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:32.260393  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:32.606493  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:32.750748  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:32.757131  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:32.761475  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:33.106159  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:33.251610  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:33.260627  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:33.268869  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:33.606251  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:33.750843  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:33.757353  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:33.759120  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 01:13:34.107109  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:34.250615  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:34.256846  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:34.259516  324746 kapi.go:107] duration metric: took 22.506222093s to wait for kubernetes.io/minikube-addons=registry ...
	I0229 01:13:34.606373  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:34.751095  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:34.757870  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:35.106049  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:35.250752  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:35.256833  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:35.606414  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:35.757108  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:35.757727  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:36.352549  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:36.354689  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:36.356728  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:36.606332  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:36.751145  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:36.757026  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:37.106864  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:37.250822  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:37.256561  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:37.606009  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:37.758030  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:37.758990  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:38.107469  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:38.251772  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:38.257020  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:38.607065  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:38.751948  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:38.757872  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:39.106704  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:39.250715  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:39.256652  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:39.607503  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:39.749671  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:39.756376  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:40.105677  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:40.250047  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:40.257224  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:40.606366  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:40.750500  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:40.757953  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:41.185397  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:41.254066  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:41.268901  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:41.606076  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:41.751459  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:41.756279  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:42.108235  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:42.250355  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:42.257722  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:42.606599  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:42.750966  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:42.757045  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:43.106952  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:43.250590  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:43.257083  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:43.606206  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:43.753730  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:43.757466  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:44.106889  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:44.251201  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:44.257182  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:44.606441  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:44.751482  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:44.758157  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:45.106320  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:45.251905  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:45.258412  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:45.607871  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:45.751394  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:45.758676  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:46.107120  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:46.250485  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:46.256789  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:46.606435  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:46.750577  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:46.756797  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:47.106544  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:47.251889  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:47.259261  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:47.607676  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:47.751031  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:47.757116  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:48.106293  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:48.261055  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:48.261293  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:48.606516  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:48.750689  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:48.757405  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:49.106113  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:49.250909  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:49.258488  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:49.606187  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:49.750661  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:49.756499  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:50.106253  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:50.251487  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:50.261586  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:50.606602  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:50.750935  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:50.756998  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:51.108742  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:51.281206  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:51.282237  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:51.606577  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:51.750669  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:51.756484  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:52.106049  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:52.251105  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:52.257438  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:52.606189  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:52.751231  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:52.757327  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:53.106598  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:53.250248  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:53.257636  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:53.607334  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:53.752012  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:53.757107  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:54.107407  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:54.251906  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:54.257999  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:54.605762  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:54.751944  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:54.763769  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:55.107508  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:55.253356  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:55.257649  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:55.606590  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:55.754861  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:55.757378  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:56.108621  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:56.250920  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:56.257029  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:56.607050  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:56.751421  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:56.756044  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:57.106876  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:57.252530  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:57.257628  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:57.609196  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:57.751504  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:57.757463  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:58.106738  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:58.250777  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:58.257263  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:58.607577  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:58.749924  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:58.758151  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:59.109965  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:59.250258  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:59.257294  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:13:59.606919  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:13:59.752332  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:13:59.757170  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:00.106853  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:00.250407  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:00.257205  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:00.612346  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:00.750702  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:00.758083  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:01.107140  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:01.253015  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:01.257551  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:01.606089  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:01.750973  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:01.756751  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:02.106170  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:02.250711  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:02.256536  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:02.606008  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:02.752380  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:02.767710  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:03.108805  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:03.250026  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:03.257029  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:03.606711  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:03.819870  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:03.822361  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:04.107237  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:04.264066  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:04.264156  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:04.608817  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:04.750565  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:04.756260  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:05.107846  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:05.250152  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:05.257339  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:05.605734  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:05.752353  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:05.759268  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:06.106857  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:06.250982  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:06.257253  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:06.607922  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:06.751054  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:06.758418  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:07.107442  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:07.251274  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:07.257596  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:07.606742  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:07.751222  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:07.757434  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:08.106064  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:08.251984  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:08.257270  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:08.606843  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:08.750884  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:08.757128  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:09.108466  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:09.251404  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:09.257100  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:09.607364  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:09.751173  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:09.757357  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:10.106707  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:10.250956  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:10.256672  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:10.622941  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:10.751421  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:10.757181  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:11.186632  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:11.252908  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:11.257387  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:11.607034  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:11.751937  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:11.759072  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:12.107073  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:12.250329  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:12.257261  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:12.606984  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:12.750653  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:12.756755  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:13.105709  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:13.255929  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:13.266793  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:13.605572  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:13.749514  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:13.756889  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:14.106113  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:14.251066  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:14.257725  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:14.606093  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:14.751744  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:14.756450  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:15.106774  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:15.250192  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:15.257208  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:15.607341  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:15.752577  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:15.760503  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:16.109800  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:16.259077  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:16.261376  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:16.607558  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:16.751578  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:16.760192  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:17.108331  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:17.250581  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:17.261604  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:17.606955  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:17.764231  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:17.764302  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:18.111111  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:18.257009  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:18.262645  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:18.606995  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:18.751445  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:18.757807  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:19.112674  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:19.251142  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:19.258090  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:19.606217  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:19.751344  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:19.765406  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:20.112110  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:20.257464  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:20.258025  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:20.606366  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:20.751986  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:20.756763  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:21.107410  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:21.251209  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:21.257861  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:21.606266  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:21.751025  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:21.756799  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:22.106005  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:22.258838  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:22.267315  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:22.606966  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:22.750791  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:22.757569  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:23.107022  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:23.250847  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:23.257194  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:23.606957  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:23.751925  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:23.758067  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:24.106478  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:24.250702  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:24.257204  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:24.607255  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:24.750891  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:24.756787  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:25.106308  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:25.250894  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:25.257343  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:25.607208  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:25.750137  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:25.757152  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:26.108922  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:26.250717  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:26.256895  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:26.606540  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:26.749612  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:26.757707  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:27.107456  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:27.250625  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:27.258151  324746 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 01:14:27.606974  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:27.751171  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:27.756989  324746 kapi.go:107] duration metric: took 1m16.006198048s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0229 01:14:28.194555  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:28.257655  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:28.605727  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:28.751025  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:29.108186  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:29.251342  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:29.607528  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:29.750452  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:30.113305  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:30.251151  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:30.606475  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:30.750681  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:31.105628  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:31.252800  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:31.606206  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:31.751524  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:32.107547  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 01:14:32.253273  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:32.606835  324746 kapi.go:107] duration metric: took 1m17.504755183s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0229 01:14:32.608784  324746 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-600097 cluster.
	I0229 01:14:32.610166  324746 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0229 01:14:32.611443  324746 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
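	(Per the hint above, the gcp-auth webhook skips any pod that carries the `gcp-auth-skip-secret` label key. A minimal sketch of creating such a pod with client-go — the pod name, image, and label value here are illustrative; the message above only specifies the key:

```go
// Minimal sketch: create a pod labeled so the gcp-auth webhook leaves it
// unmounted. Assumes client-go and a kubeconfig at the default location;
// the label VALUE is illustrative -- the hint above names only the key.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds", // hypothetical name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```
	)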
	I0229 01:14:32.764643  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:33.250415  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:33.750003  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:34.252541  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:34.752475  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:35.250406  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:35.750080  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:36.319162  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:36.750442  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:37.250468  324746 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 01:14:37.753022  324746 kapi.go:107] duration metric: took 1m25.508722462s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0229 01:14:37.754862  324746 out.go:177] * Enabled addons: default-storageclass, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, ingress-dns, helm-tiller, metrics-server, storage-provisioner, nvidia-device-plugin, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0229 01:14:37.756099  324746 addons.go:505] enable addons completed in 1m36.376936292s: enabled=[default-storageclass cloud-spanner storage-provisioner-rancher inspektor-gadget ingress-dns helm-tiller metrics-server storage-provisioner nvidia-device-plugin yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0229 01:14:37.756153  324746 start.go:233] waiting for cluster config update ...
	I0229 01:14:37.756179  324746 start.go:242] writing updated cluster config ...
	I0229 01:14:37.756501  324746 ssh_runner.go:195] Run: rm -f paused
	I0229 01:14:37.810781  324746 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 01:14:37.812619  324746 out.go:177] * Done! kubectl is now configured to use "addons-600097" cluster and "default" namespace by default
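	(Each `kapi.go:96` line in the log above is one iteration of minikube's addon wait loop: list pods by label selector, log the observed state, retry until the pods report Running or the per-addon timeout expires, then emit the `kapi.go:107` duration metric. A rough sketch of that pattern with client-go — the function names, poll interval, and hard-coded selector are illustrative, not minikube's actual kapi.go code:

```go
// Rough sketch of a label-selector wait loop in the style of the
// "waiting for pod ..." lines above. Assumes client-go; names, the
// 500ms poll interval, and the example selector are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls pods matching selector in ns until all are Running
// or timeout elapses, logging each attempt like the lines above.
func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
			fmt.Printf("took %s to wait for %s ...\n", time.Since(start), selector)
			return nil
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(context.Background(), client, "ingress-nginx",
		"app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}
```
	)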
	
	
	==> CRI-O <==
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.140625674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709169322140600072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:537271,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b9ac831-2164-41aa-9cbf-33ed3f90a9cd name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.141526692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=049c3921-f03d-4621-8324-74e14e74652f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.141672788Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=049c3921-f03d-4621-8324-74e14e74652f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.142749937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b38457446bf37e4da1da3cc2743c5834b8842eee9e838bed6e5eedd1839de0e,PodSandboxId:277d2d22c15180a2786dec4f2a0e9035e3819b7e2878382a16301c46f4e3c57a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a,State:CONTAINER_RUNNING,CreatedAt:1709169317899189351,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74ced382-368b-4f97-9124-f2ba65827e5d,},Annotations:map[string]string{io.kubernetes.container.hash: 715342cd,io.kubernetes.container.ports: [{\"name\":\"htt
p-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bac6187ed3b16ca3f8647be7b34bda8b4488cd9c3a05d0c7b486a144feb3629,PodSandboxId:5d394a514b98f19f9aae6ee22403004a54967a5ff2accb5e0f950441d6c2c043,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709169307569657306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d714fecf-09b1-4cd0-b639-1b12d34e13b3,},Annotations:map[string]string{io.kubernet
es.container.hash: 177a8c47,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f81a8bb038c6266de77e850abff8c44cf41bacba76a62198943b2418c9538ec,PodSandboxId:5addd942b069ef65793671db3cef76c70f4373e934083e9b4fd1c8b0208fc32e,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1709169295866929232,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5314df9-0423-42a
8-b65c-9d11bd4aaad7,},Annotations:map[string]string{io.kubernetes.container.hash: d11f504e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b1fd3daefc3d7a56319c7e2b9723e1779a3ac339bc1e5c42337fa7bd245461,PodSandboxId:f244b5d8d992741a352d31aa541213ff8de4a397038d364b6c57222d0d4ab5ac,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1709169292197712056,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14ec017e-17ea-432
f-9ff3-5713476309f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6480b1e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b30283c0036d873a2151b0d0f6fd51cf03db2e105653f29bf1c37a9c5897ba,PodSandboxId:742c44e0ec2b2426086224b2a3da8f35cf139c82ef2d25155e6e3dc7effb5677,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1709169290365813609,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-46cdb420-a06c-4c86-b1c5-0196b03f5f20,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 39
be76dd-81b8-4e2d-9eab-459737e2f877,},Annotations:map[string]string{io.kubernetes.container.hash: 8c1c9d72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4308f1d7f3cae89b80626c71dee7c4f1ad6723c96b2dac907c5f8bc775a6cd09,PodSandboxId:4cab364552e31d66138fbfba5fcb6a4ba7bb3026fc3c3cd43b5ae47d2b8cd80b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:538721340ded10875f4710cad688c70e5d0ecb4dcd5e7d0c161f301f36f79414,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f57d9401f8d42f986df300f0c69192fc41da28ccc8d797829467780db3dd741,State:CONTAINER_EXITED,CreatedAt:1709169287309899869,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b289fc35-8197-
4641-8555-e11426bb231c,},Annotations:map[string]string{io.kubernetes.container.hash: 5e66bcfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039bb03e99f6f829dae6f4dea04b378fb372ad6a25df647b4664e0a61adc2022,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1709169276448243162,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 61524924,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d571e9fd1b36158aa6286dfe2f1aeb2f9faa619ab8426a10d8ce90518905b485,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1709169274592664946,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 9c514ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943776a733fe4110bdc2e9af5ec247c4a89898bc9c1e31116a07a55b9b989944,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1709169272969753017,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name:
csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: a7ca3346,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e52b89c446a5aca696d9d9c8e0653b2a045ee5d7a02e149e8c7e01d4dd28c7,PodSandboxId:7eec2df001eb22efaa4e1f8b1669a9c7999ad46b0ca394028dca31a71fd34727,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709169272178893814,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kuber
netes.pod.name: gcp-auth-5f6b4f85fd-zccgt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0b9bb93b-6a21-45a5-b329-3f0735b3b8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4510e833,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec701ad058bfff255bc041395c154ba8f7df1f2ceb9283fd9d92d1ddf1cc1cf,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,Crea
tedAt:1709169268481714826,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 1f56aa49,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b,PodSandboxId:be98ab948a62be86f577bd72a87799a10fbb0e0aadc9b3f759f257708f6ce607,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:0939639a1f338a9eaaa490fd38b4a7881e47a7fd1a473baf8749ce15952b55b8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:2bdab7410148a3a4ba2e59594a812b56c285682f52ba2a03e11d6e4b5fb67e06,State:CONTAINER_RUNNING,CreatedAt:1709169266510949643,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7967645744-xvf4n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9b37e4c-eb54-46a7-8758-c79789004c90,},Annotations:map[string]string{io.kubernetes.container.hash: 106373c4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:
1577eb0b1959d4fb8e91ec0700ab13dbaace403220ae715433f6f6cc1a90b3df,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1709169259155591920,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 8d50c894,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191890fbdf1b9800170ee22da971c8297eb1644c8cefcc8cbfb3138c3fee04f,PodSandboxId:21ea03ede6c75bc22bd5228a86ce4b20c28d437664920e6fc4c2815f619c1310,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169256230025346,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-dcll4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d8720d21-d0c3-4f91-b60b-aca54716b879,},Annotations:map[string]string{io.kubernetes.container.hash: 3c9a35b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989b9a0efd10f0b5f21f2d2e2e772fd991c28062a819304732afe27d96b9b3aa,PodSandboxId:324588c75de25905a831341f3754e9a81bcdc2ffbf4181b735f0359d5fcaa14d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169256149225337,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: gcp-auth-certs-create-h4kzm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 77a1934b-e34c-475a-b5f6-4b05686d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2bfbbd03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5631fce45e2dfb0a87bbcfe51149a066915c9d85eff925a0d740594f9306be1d,PodSandboxId:aa3621ab1f6d9f4a17122630065f6ae5d9afdee4c711f5218a0541f331ae8551,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169255976962086,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rgrk5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: deed00d9-1a9e-46d5-a2ee-8bc5d56d7392,},Annotations:map[string]string{io.kubernetes.container.hash: ae3861f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:542e233b9f43e9acd11be9aafd7e387b61e4626fda7f271a62511921b0d969fa,PodSandboxId:134b69408dfd93844f3420b6069375041d3371ef7893910e7437ab57c1e29ef4,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1709169255801411397,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96d4e7b6-6974-4d78-a074-175d8b634226,},Annotations:map[string]string{io.kubernetes.container.hash: 441baee6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eee5e70b7463dd497eb1543ae853da62232de16569a1547e45bed4c8d8e0acf,PodSandboxId:fe4b9f54145fd0f1dd97f4e1470f7e7b511b7341f8874f884930f6e28c416712,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169254385825660,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgdfs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4f370dc0-0da1-44ab-bfc6-54766f7b0faa,},Annotations:map[string]string{io.kubernetes.container.hash: 1872fdd3,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e32673060f991886cacbcbbae7ad855889d795763a517e438aca91cb311a1ce,PodSandboxId:8e8c4094c5c838443a680a92336ce571e0cc386ca18ca712944c6409a943273d,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1709169252711292702,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0230873-4868-4afc-9928-0dd97f8361e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6f461862,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b209d8b2b10583762596b91a06ad36fdf4d60ac0580325b6b2b6f66b74477eb,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1709169251294004293,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48
fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: c4b8c385,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30e8d6de7844e3dae2d82a8cc55d41260975b5d0cf18e8426a6d593ee5a5a0,PodSandboxId:d1ba4610a0ac272edb3bc74bd16cb90e7bb5bd9d4f7704a98c1d7bd568663cdf,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1709169245923379977,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-9b
2bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a91b45-bf66-4d8c-a507-e1308617e2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b479daf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f535ea597342237a3e9c3f6a60946af3e038990447cc01be1cb35b95606a08e9,PodSandboxId:20c9a4844a4b67b70d7b01a0de7627ddc39d1e4402a274067f1dbf004c008e9d,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1709169245781161801,Labels:map[string]string{io.kubernetes.container.name: volume-sna
pshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-rt5hl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d0545b-72bb-4f39-a718-5aa937bc37cf,},Annotations:map[string]string{io.kubernetes.container.hash: 96b0424,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47278bfe07925227df31de5583b4df6361a65ae6493a34035cde22fe653823e,PodSandboxId:8d04fc535e877efdfdab69a7d7f6b76e57a1e72b17f1ef5c3e82cd9ef8d59168,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1709169231358794360,Labels:map[string]string{io.kubernetes
.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-qmvcb,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e5aa7bf3-4864-4a99-89f8-7130c9effa51,},Annotations:map[string]string{io.kubernetes.container.hash: 5468fdce,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2,PodSandboxId:b4a219839439ea14fa544b732481faada48e3c40631ba6358594c67290670ead,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bc
e25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1709169224773494958,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd4a21c2-8e95-404a-a7db-ee307a4d8899,},Annotations:map[string]string{io.kubernetes.container.hash: f72b21e8,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9704c1a775207dacdf60716c6037d07493e878532b1a09fa2d3fb47d621b818,PodSandboxId:96bf42d01fbe3a627dc33ba845027293cbf1bc383f9927cbef1035e2a5b9425d,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:19c696958fe8a63676ba26fa57114c33149
168bbedfea246fc55e0e665c03098,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0745b508898e2aa68f29a3c7f21023d03feace165b2430bc2297d250e65009e0,State:CONTAINER_RUNNING,CreatedAt:1709169201576638973,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-qctgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d1f69b-373d-49c1-a1da-9b03d99cc13c,},Annotations:map[string]string{io.kubernetes.container.hash: d84ded9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2a2ea68ffb9c400b6cf9fe38eee3ecee40957dc05b127ebb08c7df6de024e1,PodSandboxId:4bbf1c6c47cb8b71a2d5de3778a1791f8437fa41ca78606c45f45265438eb384,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709169189662856745,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0d595aa-7503-497b-8719-8a82ca333df3,},Annotations:map[string]string{io.kubernetes.container.hash: 44d11f0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bbd76b157e12126054b522ab7cb2345412bc2a4948f8f4d5eb0d7eed7a47b,PodSandboxId:3e817c0064e886705c681abf7feac7da74cb4fd0eb58a55721456335e0b129be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d67
2c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709169182098500488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4pcrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb43d6f-14c6-42de-be44-4441b9f518ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3f5c45d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04b3d1fabb914b27d0d0b03918098e702230a9210cfb28a7fac5e69d894e252a,PodSandb
oxId:169a417eb22246e4aea67a98b788053f2ea370de772d4cf139dccba326b2f8a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709169181427274373,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9h94v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86903f1f-0d36-4812-acde-9145f651a025,},Annotations:map[string]string{io.kubernetes.container.hash: e580e67a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e26b9c70186d9761efbc13ad8f6cbbf8e52ab8ce4a433d365f57e3a6f7fefd,PodSandboxId:767a97fffcdcf9dd63fd5bf7280b
40d6aa8f4bf9a7e02349c2eac5e92ca840bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709169161724035962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4b8244708dd77863bdc2940d7ca944,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7c9686,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9425dce69f24a188c47d5add3884b5f287c7e60724dab40e088ac0e0b54c0708,PodSandboxId:d27b0fcd09df0c9a82aaa84162a76b4791efc3e87ac318a51bb39b2d9351b21b,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709169161780781421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dcea44ad8c6fad4c7dcf5c120398c8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5009adb028085947daae1ba7fef2a0f2ae731e3c2e5efeed752043cdc2f9d0ea,PodSandboxId:2d274f82c931eaed86bdb6464b77452b457114fcb9b020f29fb441975e5bbbf8,Metadata:&ContainerMetadata
{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709169161658848741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754d725e342de23a8503217d677b914c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8918fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4954df1875ea7603adf964a5a13be727dc7fe5a12501c20f689a1fb5d72ecb65,PodSandboxId:bce95c30109346af6bcab530d85e2d136e88ed4a90e8ca8c7cd250bf1c3cacc7,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709169161599906726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f40b94ca78b79eee6c772a400b09a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=049c3921-f03d-4621-8324-74e14e74652f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.190826470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9ea6ca9-8574-4216-b261-a5df147b272b name=/runtime.v1.RuntimeService/Version
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.190902267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9ea6ca9-8574-4216-b261-a5df147b272b name=/runtime.v1.RuntimeService/Version
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.192276397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4eb8c925-682d-4e29-8288-1a8a15df1e35 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.194239077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709169322194204896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:537271,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4eb8c925-682d-4e29-8288-1a8a15df1e35 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.195220272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=206f4f10-e112-45a6-a5a5-5c5ef7c635e7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.195281171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=206f4f10-e112-45a6-a5a5-5c5ef7c635e7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.196212902Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b38457446bf37e4da1da3cc2743c5834b8842eee9e838bed6e5eedd1839de0e,PodSandboxId:277d2d22c15180a2786dec4f2a0e9035e3819b7e2878382a16301c46f4e3c57a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a,State:CONTAINER_RUNNING,CreatedAt:1709169317899189351,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74ced382-368b-4f97-9124-f2ba65827e5d,},Annotations:map[string]string{io.kubernetes.container.hash: 715342cd,io.kubernetes.container.ports: [{\"name\":\"htt
p-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bac6187ed3b16ca3f8647be7b34bda8b4488cd9c3a05d0c7b486a144feb3629,PodSandboxId:5d394a514b98f19f9aae6ee22403004a54967a5ff2accb5e0f950441d6c2c043,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709169307569657306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d714fecf-09b1-4cd0-b639-1b12d34e13b3,},Annotations:map[string]string{io.kubernet
es.container.hash: 177a8c47,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f81a8bb038c6266de77e850abff8c44cf41bacba76a62198943b2418c9538ec,PodSandboxId:5addd942b069ef65793671db3cef76c70f4373e934083e9b4fd1c8b0208fc32e,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1709169295866929232,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5314df9-0423-42a
8-b65c-9d11bd4aaad7,},Annotations:map[string]string{io.kubernetes.container.hash: d11f504e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b1fd3daefc3d7a56319c7e2b9723e1779a3ac339bc1e5c42337fa7bd245461,PodSandboxId:f244b5d8d992741a352d31aa541213ff8de4a397038d364b6c57222d0d4ab5ac,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1709169292197712056,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14ec017e-17ea-432
f-9ff3-5713476309f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6480b1e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b30283c0036d873a2151b0d0f6fd51cf03db2e105653f29bf1c37a9c5897ba,PodSandboxId:742c44e0ec2b2426086224b2a3da8f35cf139c82ef2d25155e6e3dc7effb5677,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1709169290365813609,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-46cdb420-a06c-4c86-b1c5-0196b03f5f20,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 39
be76dd-81b8-4e2d-9eab-459737e2f877,},Annotations:map[string]string{io.kubernetes.container.hash: 8c1c9d72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4308f1d7f3cae89b80626c71dee7c4f1ad6723c96b2dac907c5f8bc775a6cd09,PodSandboxId:4cab364552e31d66138fbfba5fcb6a4ba7bb3026fc3c3cd43b5ae47d2b8cd80b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:538721340ded10875f4710cad688c70e5d0ecb4dcd5e7d0c161f301f36f79414,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f57d9401f8d42f986df300f0c69192fc41da28ccc8d797829467780db3dd741,State:CONTAINER_EXITED,CreatedAt:1709169287309899869,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b289fc35-8197-
4641-8555-e11426bb231c,},Annotations:map[string]string{io.kubernetes.container.hash: 5e66bcfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039bb03e99f6f829dae6f4dea04b378fb372ad6a25df647b4664e0a61adc2022,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1709169276448243162,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 61524924,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d571e9fd1b36158aa6286dfe2f1aeb2f9faa619ab8426a10d8ce90518905b485,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1709169274592664946,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 9c514ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943776a733fe4110bdc2e9af5ec247c4a89898bc9c1e31116a07a55b9b989944,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1709169272969753017,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name:
csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: a7ca3346,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e52b89c446a5aca696d9d9c8e0653b2a045ee5d7a02e149e8c7e01d4dd28c7,PodSandboxId:7eec2df001eb22efaa4e1f8b1669a9c7999ad46b0ca394028dca31a71fd34727,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709169272178893814,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kuber
netes.pod.name: gcp-auth-5f6b4f85fd-zccgt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0b9bb93b-6a21-45a5-b329-3f0735b3b8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4510e833,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec701ad058bfff255bc041395c154ba8f7df1f2ceb9283fd9d92d1ddf1cc1cf,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,Crea
tedAt:1709169268481714826,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 1f56aa49,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b,PodSandboxId:be98ab948a62be86f577bd72a87799a10fbb0e0aadc9b3f759f257708f6ce607,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:0939639a1f338a9eaaa490fd38b4a7881e47a7fd1a473baf8749ce15952b55b8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:2bdab7410148a3a4ba2e59594a812b56c285682f52ba2a03e11d6e4b5fb67e06,State:CONTAINER_RUNNING,CreatedAt:1709169266510949643,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7967645744-xvf4n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9b37e4c-eb54-46a7-8758-c79789004c90,},Annotations:map[string]string{io.kubernetes.container.hash: 106373c4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:
1577eb0b1959d4fb8e91ec0700ab13dbaace403220ae715433f6f6cc1a90b3df,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1709169259155591920,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 8d50c894,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191890fbdf1b9800170ee22da971c8297eb1644c8cefcc8cbfb3138c3fee04f,PodSandboxId:21ea03ede6c75bc22bd5228a86ce4b20c28d437664920e6fc4c2815f619c1310,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169256230025346,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-dcll4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d8720d21-d0c3-4f91-b60b-aca54716b879,},Annotations:map[string]string{io.kubernetes.container.hash: 3c9a35b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989b9a0efd10f0b5f21f2d2e2e772fd991c28062a819304732afe27d96b9b3aa,PodSandboxId:324588c75de25905a831341f3754e9a81bcdc2ffbf4181b735f0359d5fcaa14d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169256149225337,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: gcp-auth-certs-create-h4kzm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 77a1934b-e34c-475a-b5f6-4b05686d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2bfbbd03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5631fce45e2dfb0a87bbcfe51149a066915c9d85eff925a0d740594f9306be1d,PodSandboxId:aa3621ab1f6d9f4a17122630065f6ae5d9afdee4c711f5218a0541f331ae8551,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169255976962086,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rgrk5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: deed00d9-1a9e-46d5-a2ee-8bc5d56d7392,},Annotations:map[string]string{io.kubernetes.container.hash: ae3861f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:542e233b9f43e9acd11be9aafd7e387b61e4626fda7f271a62511921b0d969fa,PodSandboxId:134b69408dfd93844f3420b6069375041d3371ef7893910e7437ab57c1e29ef4,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1709169255801411397,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96d4e7b6-6974-4d78-a074-175d8b634226,},Annotations:map[string]string{io.kubernetes.container.hash: 441baee6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eee5e70b7463dd497eb1543ae853da62232de16569a1547e45bed4c8d8e0acf,PodSandboxId:fe4b9f54145fd0f1dd97f4e1470f7e7b511b7341f8874f884930f6e28c416712,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169254385825660,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgdfs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4f370dc0-0da1-44ab-bfc6-54766f7b0faa,},Annotations:map[string]string{io.kubernetes.container.hash: 1872fdd3,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e32673060f991886cacbcbbae7ad855889d795763a517e438aca91cb311a1ce,PodSandboxId:8e8c4094c5c838443a680a92336ce571e0cc386ca18ca712944c6409a943273d,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1709169252711292702,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0230873-4868-4afc-9928-0dd97f8361e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6f461862,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b209d8b2b10583762596b91a06ad36fdf4d60ac0580325b6b2b6f66b74477eb,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1709169251294004293,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48
fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: c4b8c385,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30e8d6de7844e3dae2d82a8cc55d41260975b5d0cf18e8426a6d593ee5a5a0,PodSandboxId:d1ba4610a0ac272edb3bc74bd16cb90e7bb5bd9d4f7704a98c1d7bd568663cdf,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1709169245923379977,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-9b
2bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a91b45-bf66-4d8c-a507-e1308617e2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b479daf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f535ea597342237a3e9c3f6a60946af3e038990447cc01be1cb35b95606a08e9,PodSandboxId:20c9a4844a4b67b70d7b01a0de7627ddc39d1e4402a274067f1dbf004c008e9d,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1709169245781161801,Labels:map[string]string{io.kubernetes.container.name: volume-sna
pshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-rt5hl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d0545b-72bb-4f39-a718-5aa937bc37cf,},Annotations:map[string]string{io.kubernetes.container.hash: 96b0424,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47278bfe07925227df31de5583b4df6361a65ae6493a34035cde22fe653823e,PodSandboxId:8d04fc535e877efdfdab69a7d7f6b76e57a1e72b17f1ef5c3e82cd9ef8d59168,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1709169231358794360,Labels:map[string]string{io.kubernetes
.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-qmvcb,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e5aa7bf3-4864-4a99-89f8-7130c9effa51,},Annotations:map[string]string{io.kubernetes.container.hash: 5468fdce,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2,PodSandboxId:b4a219839439ea14fa544b732481faada48e3c40631ba6358594c67290670ead,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bc
e25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1709169224773494958,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd4a21c2-8e95-404a-a7db-ee307a4d8899,},Annotations:map[string]string{io.kubernetes.container.hash: f72b21e8,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9704c1a775207dacdf60716c6037d07493e878532b1a09fa2d3fb47d621b818,PodSandboxId:96bf42d01fbe3a627dc33ba845027293cbf1bc383f9927cbef1035e2a5b9425d,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:19c696958fe8a63676ba26fa57114c33149
168bbedfea246fc55e0e665c03098,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0745b508898e2aa68f29a3c7f21023d03feace165b2430bc2297d250e65009e0,State:CONTAINER_RUNNING,CreatedAt:1709169201576638973,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-qctgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d1f69b-373d-49c1-a1da-9b03d99cc13c,},Annotations:map[string]string{io.kubernetes.container.hash: d84ded9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2a2ea68ffb9c400b6cf9fe38eee3ecee40957dc05b127ebb08c7df6de024e1,PodSandboxId:4bbf1c6c47cb8b71a2d5de3778a1791f8437fa41ca78606c45f45265438eb384,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709169189662856745,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0d595aa-7503-497b-8719-8a82ca333df3,},Annotations:map[string]string{io.kubernetes.container.hash: 44d11f0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bbd76b157e12126054b522ab7cb2345412bc2a4948f8f4d5eb0d7eed7a47b,PodSandboxId:3e817c0064e886705c681abf7feac7da74cb4fd0eb58a55721456335e0b129be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d67
2c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709169182098500488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4pcrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb43d6f-14c6-42de-be44-4441b9f518ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3f5c45d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04b3d1fabb914b27d0d0b03918098e702230a9210cfb28a7fac5e69d894e252a,PodSandb
oxId:169a417eb22246e4aea67a98b788053f2ea370de772d4cf139dccba326b2f8a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709169181427274373,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9h94v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86903f1f-0d36-4812-acde-9145f651a025,},Annotations:map[string]string{io.kubernetes.container.hash: e580e67a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e26b9c70186d9761efbc13ad8f6cbbf8e52ab8ce4a433d365f57e3a6f7fefd,PodSandboxId:767a97fffcdcf9dd63fd5bf7280b
40d6aa8f4bf9a7e02349c2eac5e92ca840bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709169161724035962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4b8244708dd77863bdc2940d7ca944,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7c9686,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9425dce69f24a188c47d5add3884b5f287c7e60724dab40e088ac0e0b54c0708,PodSandboxId:d27b0fcd09df0c9a82aaa84162a76b4791efc3e87ac318a51bb39b2d9351b21b,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709169161780781421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dcea44ad8c6fad4c7dcf5c120398c8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5009adb028085947daae1ba7fef2a0f2ae731e3c2e5efeed752043cdc2f9d0ea,PodSandboxId:2d274f82c931eaed86bdb6464b77452b457114fcb9b020f29fb441975e5bbbf8,Metadata:&ContainerMetadata
{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709169161658848741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754d725e342de23a8503217d677b914c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8918fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4954df1875ea7603adf964a5a13be727dc7fe5a12501c20f689a1fb5d72ecb65,PodSandboxId:bce95c30109346af6bcab530d85e2d136e88ed4a90e8ca8c7cd250bf1c3cacc7,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709169161599906726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f40b94ca78b79eee6c772a400b09a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=206f4f10-e112-45a6-a5a5-5c5ef7c635e7 name=/runtime.v1.RuntimeService/ListContainers
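The log lines above capture one full kubelet polling cycle against CRI-O, repeated every few tens of milliseconds: a RuntimeService/Version probe, an ImageService/ImageFsInfo query, then an unfiltered RuntimeService/ListContainers call ("No filters were applied, returning full container list"). For anyone triaging this report, the same three RPCs can be replayed by hand from inside the node. This is a minimal sketch, not part of the test run itself, assuming crictl is available in the minikube guest (it ships in the guest image) and CRI-O is listening on its default socket:

  $ minikube -p addons-600097 ssh
  # inside the VM, point crictl at the same CRI-O socket that crio[679] serves
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # ListContainers, no state filter

With no filter applied, the last command should return roughly the same full container list seen in the responses above: the kube-system control plane, the csi-hostpathplugin and snapshot-controller pods, ingress-nginx, gcp-auth, and the test workloads in the default namespace.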
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.233598885Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd7791cf-7e62-401f-9597-ab13e3d3d17e name=/runtime.v1.RuntimeService/Version
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.233704677Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd7791cf-7e62-401f-9597-ab13e3d3d17e name=/runtime.v1.RuntimeService/Version
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.234965371Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58c33a00-01a7-4129-a948-ff9288beb759 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.236643220Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709169322236612631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:537271,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58c33a00-01a7-4129-a948-ff9288beb759 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.237275423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be582b01-5632-4b51-a0f1-96470084a7c8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.237419124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be582b01-5632-4b51-a0f1-96470084a7c8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.238216211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b38457446bf37e4da1da3cc2743c5834b8842eee9e838bed6e5eedd1839de0e,PodSandboxId:277d2d22c15180a2786dec4f2a0e9035e3819b7e2878382a16301c46f4e3c57a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a,State:CONTAINER_RUNNING,CreatedAt:1709169317899189351,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74ced382-368b-4f97-9124-f2ba65827e5d,},Annotations:map[string]string{io.kubernetes.container.hash: 715342cd,io.kubernetes.container.ports: [{\"name\":\"htt
p-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bac6187ed3b16ca3f8647be7b34bda8b4488cd9c3a05d0c7b486a144feb3629,PodSandboxId:5d394a514b98f19f9aae6ee22403004a54967a5ff2accb5e0f950441d6c2c043,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709169307569657306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d714fecf-09b1-4cd0-b639-1b12d34e13b3,},Annotations:map[string]string{io.kubernet
es.container.hash: 177a8c47,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f81a8bb038c6266de77e850abff8c44cf41bacba76a62198943b2418c9538ec,PodSandboxId:5addd942b069ef65793671db3cef76c70f4373e934083e9b4fd1c8b0208fc32e,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1709169295866929232,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5314df9-0423-42a
8-b65c-9d11bd4aaad7,},Annotations:map[string]string{io.kubernetes.container.hash: d11f504e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b1fd3daefc3d7a56319c7e2b9723e1779a3ac339bc1e5c42337fa7bd245461,PodSandboxId:f244b5d8d992741a352d31aa541213ff8de4a397038d364b6c57222d0d4ab5ac,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1709169292197712056,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14ec017e-17ea-432
f-9ff3-5713476309f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6480b1e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b30283c0036d873a2151b0d0f6fd51cf03db2e105653f29bf1c37a9c5897ba,PodSandboxId:742c44e0ec2b2426086224b2a3da8f35cf139c82ef2d25155e6e3dc7effb5677,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1709169290365813609,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-46cdb420-a06c-4c86-b1c5-0196b03f5f20,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 39
be76dd-81b8-4e2d-9eab-459737e2f877,},Annotations:map[string]string{io.kubernetes.container.hash: 8c1c9d72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4308f1d7f3cae89b80626c71dee7c4f1ad6723c96b2dac907c5f8bc775a6cd09,PodSandboxId:4cab364552e31d66138fbfba5fcb6a4ba7bb3026fc3c3cd43b5ae47d2b8cd80b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:538721340ded10875f4710cad688c70e5d0ecb4dcd5e7d0c161f301f36f79414,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f57d9401f8d42f986df300f0c69192fc41da28ccc8d797829467780db3dd741,State:CONTAINER_EXITED,CreatedAt:1709169287309899869,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b289fc35-8197-
4641-8555-e11426bb231c,},Annotations:map[string]string{io.kubernetes.container.hash: 5e66bcfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039bb03e99f6f829dae6f4dea04b378fb372ad6a25df647b4664e0a61adc2022,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1709169276448243162,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 61524924,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d571e9fd1b36158aa6286dfe2f1aeb2f9faa619ab8426a10d8ce90518905b485,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1709169274592664946,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 9c514ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943776a733fe4110bdc2e9af5ec247c4a89898bc9c1e31116a07a55b9b989944,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1709169272969753017,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name:
csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: a7ca3346,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e52b89c446a5aca696d9d9c8e0653b2a045ee5d7a02e149e8c7e01d4dd28c7,PodSandboxId:7eec2df001eb22efaa4e1f8b1669a9c7999ad46b0ca394028dca31a71fd34727,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709169272178893814,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kuber
netes.pod.name: gcp-auth-5f6b4f85fd-zccgt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0b9bb93b-6a21-45a5-b329-3f0735b3b8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4510e833,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ec701ad058bfff255bc041395c154ba8f7df1f2ceb9283fd9d92d1ddf1cc1cf,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,Crea
tedAt:1709169268481714826,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 1f56aa49,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc89ee1505e8df19d546303a482e04335a40a13fa88c957799de90cfcd94277b,PodSandboxId:be98ab948a62be86f577bd72a87799a10fbb0e0aadc9b3f759f257708f6ce607,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:0939639a1f338a9eaaa490fd38b4a7881e47a7fd1a473baf8749ce15952b55b8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:2bdab7410148a3a4ba2e59594a812b56c285682f52ba2a03e11d6e4b5fb67e06,State:CONTAINER_RUNNING,CreatedAt:1709169266510949643,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7967645744-xvf4n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9b37e4c-eb54-46a7-8758-c79789004c90,},Annotations:map[string]string{io.kubernetes.container.hash: 106373c4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:
1577eb0b1959d4fb8e91ec0700ab13dbaace403220ae715433f6f6cc1a90b3df,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1709169259155591920,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 8d50c894,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3191890fbdf1b9800170ee22da971c8297eb1644c8cefcc8cbfb3138c3fee04f,PodSandboxId:21ea03ede6c75bc22bd5228a86ce4b20c28d437664920e6fc4c2815f619c1310,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169256230025346,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-dcll4,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d8720d21-d0c3-4f91-b60b-aca54716b879,},Annotations:map[string]string{io.kubernetes.container.hash: 3c9a35b9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMess
agePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989b9a0efd10f0b5f21f2d2e2e772fd991c28062a819304732afe27d96b9b3aa,PodSandboxId:324588c75de25905a831341f3754e9a81bcdc2ffbf4181b735f0359d5fcaa14d,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169256149225337,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: gcp-auth-certs-create-h4kzm,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 77a1934b-e34c-475a-b5f6-4b05686d3b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2bfbbd03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5631fce45e2dfb0a87bbcfe51149a066915c9d85eff925a0d740594f9306be1d,PodSandboxId:aa3621ab1f6d9f4a17122630065f6ae5d9afdee4c711f5218a0541f331ae8551,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169255976962086,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rgrk5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: deed00d9-1a9e-46d5-a2ee-8bc5d56d7392,},Annotations:map[string]string{io.kubernetes.container.hash: ae3861f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:542e233b9f43e9acd11be9aafd7e387b61e4626fda7f271a62511921b0d969fa,PodSandboxId:134b69408dfd93844f3420b6069375041d3371ef7893910e7437ab57c1e29ef4,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1709169255801411397,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96d4e7b6-6974-4d78-a074-175d8b634226,},Annotations:map[string]string{io.kubernetes.container.hash: 441baee6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eee5e70b7463dd497eb1543ae853da62232de16569a1547e45bed4c8d8e0acf,PodSandboxId:fe4b9f54145fd0f1dd97f4e1470f7e7b511b7341f8874f884930f6e28c416712,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:eb825d2bb76b9bd44057dcec57a768cfda70562cb08d84eb201107a17b86b87b,State:CONTAINER_EXITED,CreatedAt:1709169254385825660,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgdfs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4f370dc0-0da1-44ab-bfc6-54766f7b0faa,},Annotations:map[string]string{io.kubernetes.container.hash: 1872fdd3,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e32673060f991886cacbcbbae7ad855889d795763a517e438aca91cb311a1ce,PodSandboxId:8e8c4094c5c838443a680a92336ce571e0cc386ca18ca712944c6409a943273d,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1709169252711292702,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0230873-4868-4afc-9928-0dd97f8361e4,},Annotations:map[string]string{io.kubernetes.container.hash: 6f461862,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b209d8b2b10583762596b91a06ad36fdf4d60ac0580325b6b2b6f66b74477eb,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1709169251294004293,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48
fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: c4b8c385,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d30e8d6de7844e3dae2d82a8cc55d41260975b5d0cf18e8426a6d593ee5a5a0,PodSandboxId:d1ba4610a0ac272edb3bc74bd16cb90e7bb5bd9d4f7704a98c1d7bd568663cdf,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1709169245923379977,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-9b
2bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87a91b45-bf66-4d8c-a507-e1308617e2e8,},Annotations:map[string]string{io.kubernetes.container.hash: 4b479daf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f535ea597342237a3e9c3f6a60946af3e038990447cc01be1cb35b95606a08e9,PodSandboxId:20c9a4844a4b67b70d7b01a0de7627ddc39d1e4402a274067f1dbf004c008e9d,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1709169245781161801,Labels:map[string]string{io.kubernetes.container.name: volume-sna
pshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-rt5hl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d0545b-72bb-4f39-a718-5aa937bc37cf,},Annotations:map[string]string{io.kubernetes.container.hash: 96b0424,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d47278bfe07925227df31de5583b4df6361a65ae6493a34035cde22fe653823e,PodSandboxId:8d04fc535e877efdfdab69a7d7f6b76e57a1e72b17f1ef5c3e82cd9ef8d59168,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1709169231358794360,Labels:map[string]string{io.kubernetes
.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-qmvcb,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e5aa7bf3-4864-4a99-89f8-7130c9effa51,},Annotations:map[string]string{io.kubernetes.container.hash: 5468fdce,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4fe2a0e225aec9fc359a02e6279b71005f718984b09c8284fab5586596b78a2,PodSandboxId:b4a219839439ea14fa544b732481faada48e3c40631ba6358594c67290670ead,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bc
e25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1709169224773494958,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd4a21c2-8e95-404a-a7db-ee307a4d8899,},Annotations:map[string]string{io.kubernetes.container.hash: f72b21e8,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9704c1a775207dacdf60716c6037d07493e878532b1a09fa2d3fb47d621b818,PodSandboxId:96bf42d01fbe3a627dc33ba845027293cbf1bc383f9927cbef1035e2a5b9425d,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:19c696958fe8a63676ba26fa57114c33149
168bbedfea246fc55e0e665c03098,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0745b508898e2aa68f29a3c7f21023d03feace165b2430bc2297d250e65009e0,State:CONTAINER_RUNNING,CreatedAt:1709169201576638973,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-qctgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6d1f69b-373d-49c1-a1da-9b03d99cc13c,},Annotations:map[string]string{io.kubernetes.container.hash: d84ded9f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a2a2ea68ffb9c400b6cf9fe38eee3ecee40957dc05b127ebb08c7df6de024e1,PodSandboxId:4bbf1c6c47cb8b71a2d5de3778a1791f8437fa41ca78606c45f45265438eb384,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709169189662856745,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0d595aa-7503-497b-8719-8a82ca333df3,},Annotations:map[string]string{io.kubernetes.container.hash: 44d11f0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:212bbd76b157e12126054b522ab7cb2345412bc2a4948f8f4d5eb0d7eed7a47b,PodSandboxId:3e817c0064e886705c681abf7feac7da74cb4fd0eb58a55721456335e0b129be,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d67
2c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709169182098500488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-4pcrt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eb43d6f-14c6-42de-be44-4441b9f518ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3f5c45d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04b3d1fabb914b27d0d0b03918098e702230a9210cfb28a7fac5e69d894e252a,PodSandb
oxId:169a417eb22246e4aea67a98b788053f2ea370de772d4cf139dccba326b2f8a7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709169181427274373,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9h94v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86903f1f-0d36-4812-acde-9145f651a025,},Annotations:map[string]string{io.kubernetes.container.hash: e580e67a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e26b9c70186d9761efbc13ad8f6cbbf8e52ab8ce4a433d365f57e3a6f7fefd,PodSandboxId:767a97fffcdcf9dd63fd5bf7280b
40d6aa8f4bf9a7e02349c2eac5e92ca840bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709169161724035962,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4b8244708dd77863bdc2940d7ca944,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7c9686,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9425dce69f24a188c47d5add3884b5f287c7e60724dab40e088ac0e0b54c0708,PodSandboxId:d27b0fcd09df0c9a82aaa84162a76b4791efc3e87ac318a51bb39b2d9351b21b,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709169161780781421,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84dcea44ad8c6fad4c7dcf5c120398c8,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5009adb028085947daae1ba7fef2a0f2ae731e3c2e5efeed752043cdc2f9d0ea,PodSandboxId:2d274f82c931eaed86bdb6464b77452b457114fcb9b020f29fb441975e5bbbf8,Metadata:&ContainerMetadata
{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709169161658848741,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754d725e342de23a8503217d677b914c,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8918fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4954df1875ea7603adf964a5a13be727dc7fe5a12501c20f689a1fb5d72ecb65,PodSandboxId:bce95c30109346af6bcab530d85e2d136e88ed4a90e8ca8c7cd250bf1c3cacc7,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709169161599906726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-600097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f40b94ca78b79eee6c772a400b09a2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be582b01-5632-4b51-a0f1-96470084a7c8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.284870407Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=41634f28-54af-4eaa-9b7b-cf7e121d14e7 name=/runtime.v1.RuntimeService/Version
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.284944629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=41634f28-54af-4eaa-9b7b-cf7e121d14e7 name=/runtime.v1.RuntimeService/Version
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.285964865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6685c26a-bc2b-4857-beb7-c4ccda9df338 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.287367149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709169322287333683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:537271,},InodesUsed:&UInt64Value{Value:194,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6685c26a-bc2b-4857-beb7-c4ccda9df338 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.288417261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=515052c0-4981-455b-b6b5-bd2a51b5791a name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.288481981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=515052c0-4981-455b-b6b5-bd2a51b5791a name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:15:22 addons-600097 crio[679]: time="2024-02-29 01:15:22.289177224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2b38457446bf37e4da1da3cc2743c5834b8842eee9e838bed6e5eedd1839de0e,PodSandboxId:277d2d22c15180a2786dec4f2a0e9035e3819b7e2878382a16301c46f4e3c57a,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a,State:CONTAINER_RUNNING,CreatedAt:1709169317899189351,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74ced382-368b-4f97-9124-f2ba65827e5d,},Annotations:map[string]string{io.kubernetes.container.hash: 715342cd,io.kubernetes.container.ports: [{\"name\":\"htt
p-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bac6187ed3b16ca3f8647be7b34bda8b4488cd9c3a05d0c7b486a144feb3629,PodSandboxId:5d394a514b98f19f9aae6ee22403004a54967a5ff2accb5e0f950441d6c2c043,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1709169307569657306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d714fecf-09b1-4cd0-b639-1b12d34e13b3,},Annotations:map[string]string{io.kubernet
es.container.hash: 177a8c47,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f81a8bb038c6266de77e850abff8c44cf41bacba76a62198943b2418c9538ec,PodSandboxId:5addd942b069ef65793671db3cef76c70f4373e934083e9b4fd1c8b0208fc32e,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1709169295866929232,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5314df9-0423-42a
8-b65c-9d11bd4aaad7,},Annotations:map[string]string{io.kubernetes.container.hash: d11f504e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49b1fd3daefc3d7a56319c7e2b9723e1779a3ac339bc1e5c42337fa7bd245461,PodSandboxId:f244b5d8d992741a352d31aa541213ff8de4a397038d364b6c57222d0d4ab5ac,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a,State:CONTAINER_EXITED,CreatedAt:1709169292197712056,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 14ec017e-17ea-432
f-9ff3-5713476309f1,},Annotations:map[string]string{io.kubernetes.container.hash: 6480b1e2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0b30283c0036d873a2151b0d0f6fd51cf03db2e105653f29bf1c37a9c5897ba,PodSandboxId:742c44e0ec2b2426086224b2a3da8f35cf139c82ef2d25155e6e3dc7effb5677,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1709169290365813609,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-46cdb420-a06c-4c86-b1c5-0196b03f5f20,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 39
be76dd-81b8-4e2d-9eab-459737e2f877,},Annotations:map[string]string{io.kubernetes.container.hash: 8c1c9d72,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4308f1d7f3cae89b80626c71dee7c4f1ad6723c96b2dac907c5f8bc775a6cd09,PodSandboxId:4cab364552e31d66138fbfba5fcb6a4ba7bb3026fc3c3cd43b5ae47d2b8cd80b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:538721340ded10875f4710cad688c70e5d0ecb4dcd5e7d0c161f301f36f79414,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f57d9401f8d42f986df300f0c69192fc41da28ccc8d797829467780db3dd741,State:CONTAINER_EXITED,CreatedAt:1709169287309899869,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b289fc35-8197-
4641-8555-e11426bb231c,},Annotations:map[string]string{io.kubernetes.container.hash: 5e66bcfd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039bb03e99f6f829dae6f4dea04b378fb372ad6a25df647b4664e0a61adc2022,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1709169276448243162,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 61524924,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d571e9fd1b36158aa6286dfe2f1aeb2f9faa619ab8426a10d8ce90518905b485,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1709169274592664946,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-qp8h8,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: 9c514ddc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943776a733fe4110bdc2e9af5ec247c4a89898bc9c1e31116a07a55b9b989944,PodSandboxId:a9da780fe95629410010bd1d626b527244f567a4f8b17f370996ba2421497417,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1709169272969753017,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name:
csi-hostpathplugin-qp8h8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ff48fd-0803-4e5a-8d3d-71b3c9399207,},Annotations:map[string]string{io.kubernetes.container.hash: a7ca3346,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6e52b89c446a5aca696d9d9c8e0653b2a045ee5d7a02e149e8c7e01d4dd28c7,PodSandboxId:7eec2df001eb22efaa4e1f8b1669a9c7999ad46b0ca394028dca31a71fd34727,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc1caf62c3016e48310e3e283eb11c9ecd4da232e9176c095794541232492b7c,State:CONTAINER_RUNNING,CreatedAt:1709169272178893814,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kuber
netes.pod.name: gcp-auth-5f6b4f85fd-zccgt,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 0b9bb93b-6a21-45a5-b329-3f0735b3b8cf,},Annotations:map[string]string{io.kubernetes.container.hash: 4510e833,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},[remaining container entries identical to the ListContainers response above],},}" file="otel-collector/interceptors.go:74" id=515052c0-4981-455b-b6b5-bd2a51b5791a name=/runtime.v1.RuntimeService/ListContainers
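
The interceptor entries above trace individual CRI calls against crio's gRPC socket: a RuntimeService/Version handshake, an ImageService/ImageFsInfo query for /var/lib/containers/storage/overlay-images, and an unfiltered RuntimeService/ListContainers (logged as "No filters were applied, returning full container list"). For reference only, the following is a minimal Go sketch of a client issuing those same three RPCs; it assumes the k8s.io/cri-api and google.golang.org/grpc modules and CRI-O's default socket path, and is not part of the test suite.

// cri_probe.go: minimal sketch of the three CRI RPCs traced in the crio log above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default endpoint; crictl dials the same target.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// RuntimeService/Version -- matches the VersionRequest/VersionResponse pair in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("%s %s (API %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// ImageService/ImageFsInfo -- reports image filesystem usage, as in the response above.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("imagefsinfo: %v", err)
	}
	for _, u := range fs.ImageFilesystems {
		fmt.Printf("%s: %d bytes used\n", u.FsId.Mountpoint, u.UsedBytes.Value)
	}

	// RuntimeService/ListContainers with an empty filter -- crio logs this as
	// "No filters were applied, returning full container list".
	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("listcontainers: %v", err)
	}
	for _, c := range cs.Containers {
		fmt.Printf("%s  %-40s %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

The same endpoints can be exercised from a shell with crictl version, crictl imagefsinfo, and crictl ps -a, which produce output equivalent to the container status table below.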
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	2b38457446bf3       docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71                                              4 seconds ago        Running             task-pv-container                        0                   277d2d22c1518       task-pv-pod
	2bac6187ed3b1       docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                                              14 seconds ago       Running             nginx                                    0                   5d394a514b98f       nginx
	6f81a8bb038c6       docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                26 seconds ago       Exited              helm-test                                0                   5addd942b069e       helm-test
	49b1fd3daefc3       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          30 seconds ago       Exited              registry-test                            0                   f244b5d8d9927       registry-test
	a0b30283c0036       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             32 seconds ago       Exited              helper-pod                               0                   742c44e0ec2b2       helper-pod-delete-pvc-46cdb420-a06c-4c86-b1c5-0196b03f5f20
	4308f1d7f3cae       docker.io/library/busybox@sha256:538721340ded10875f4710cad688c70e5d0ecb4dcd5e7d0c161f301f36f79414                                            35 seconds ago       Exited              busybox                                  0                   4cab364552e31       test-local-path
	039bb03e99f6f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          45 seconds ago       Running             csi-snapshotter                          0                   a9da780fe9562       csi-hostpathplugin-qp8h8
	d571e9fd1b361       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          47 seconds ago       Running             csi-provisioner                          0                   a9da780fe9562       csi-hostpathplugin-qp8h8
	943776a733fe4       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            49 seconds ago       Running             liveness-probe                           0                   a9da780fe9562       csi-hostpathplugin-qp8h8
	f6e52b89c446a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                                 50 seconds ago       Running             gcp-auth                                 0                   7eec2df001eb2       gcp-auth-5f6b4f85fd-zccgt
	4ec701ad058bf       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           53 seconds ago       Running             hostpath                                 0                   a9da780fe9562       csi-hostpathplugin-qp8h8
	dc89ee1505e8d       registry.k8s.io/ingress-nginx/controller@sha256:0939639a1f338a9eaaa490fd38b4a7881e47a7fd1a473baf8749ce15952b55b8                             55 seconds ago       Running             controller                               0                   be98ab948a62b       ingress-nginx-controller-7967645744-xvf4n
	1577eb0b1959d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                About a minute ago   Running             node-driver-registrar                    0                   a9da780fe9562       csi-hostpathplugin-qp8h8
	3191890fbdf1b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e                   About a minute ago   Exited              patch                                    0                   21ea03ede6c75       gcp-auth-certs-patch-dcll4
	989b9a0efd10f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e                   About a minute ago   Exited              create                                   0                   324588c75de25       gcp-auth-certs-create-h4kzm
	5631fce45e2df       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e                   About a minute ago   Exited              patch                                    0                   aa3621ab1f6d9       ingress-nginx-admission-patch-rgrk5
	542e233b9f43e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   134b69408dfd9       csi-hostpath-resizer-0
	2eee5e70b7463       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:17c938714b4ca5a5a3a875565ecc8e56b94ebc8c132ba24c08dd1ef3c92bc39e                   About a minute ago   Exited              create                                   0                   fe4b9f54145fd       ingress-nginx-admission-create-sgdfs
	6e32673060f99       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   8e8c4094c5c83       csi-hostpath-attacher-0
	9b209d8b2b105       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   a9da780fe9562       csi-hostpathplugin-qp8h8
	6d30e8d6de784       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   d1ba4610a0ac2       snapshot-controller-58dbcc7b99-9b2bf
	f535ea5973422       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   20c9a4844a4b6       snapshot-controller-58dbcc7b99-rt5hl
	d47278bfe0792       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                              About a minute ago   Running             yakd                                     0                   8d04fc535e877       yakd-dashboard-9947fc6bf-qmvcb
	d4fe2a0e225ae       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             About a minute ago   Running             minikube-ingress-dns                     0                   b4a219839439e       kube-ingress-dns-minikube
	b9704c1a77520       nvcr.io/nvidia/k8s-device-plugin@sha256:19c696958fe8a63676ba26fa57114c33149168bbedfea246fc55e0e665c03098                                     2 minutes ago        Running             nvidia-device-plugin-ctr                 0                   96bf42d01fbe3       nvidia-device-plugin-daemonset-qctgj
	8a2a2ea68ffb9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             2 minutes ago        Running             storage-provisioner                      0                   4bbf1c6c47cb8       storage-provisioner
	212bbd76b157e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             2 minutes ago        Running             coredns                                  0                   3e817c0064e88       coredns-5dd5756b68-4pcrt
	04b3d1fabb914       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                                             2 minutes ago        Running             kube-proxy                               0                   169a417eb2224       kube-proxy-9h94v
	9425dce69f24a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                                             2 minutes ago        Running             kube-scheduler                           0                   d27b0fcd09df0       kube-scheduler-addons-600097
	f6e26b9c70186       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago        Running             etcd                                     0                   767a97fffcdcf       etcd-addons-600097
	5009adb028085       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                                             2 minutes ago        Running             kube-apiserver                           0                   2d274f82c931e       kube-apiserver-addons-600097
	4954df1875ea7       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                                             2 minutes ago        Running             kube-controller-manager                  0                   bce95c3010934       kube-controller-manager-addons-600097
	
	
	==> coredns [212bbd76b157e12126054b522ab7cb2345412bc2a4948f8f4d5eb0d7eed7a47b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:56750 - 50252 "HINFO IN 6012791314514810438.1269741809087642760. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021058283s
	
	
	==> describe nodes <==
	Name:               addons-600097
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-600097
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=addons-600097
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T01_12_47_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-600097
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-600097"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 01:12:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-600097
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 01:15:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 01:15:20 +0000   Thu, 29 Feb 2024 01:12:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 01:15:20 +0000   Thu, 29 Feb 2024 01:12:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 01:15:20 +0000   Thu, 29 Feb 2024 01:12:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 01:15:20 +0000   Thu, 29 Feb 2024 01:12:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    addons-600097
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a38fcc474c74a92a46fb62fa427ef29
	  System UUID:                3a38fcc4-74c7-4a92-a46f-b62fa427ef29
	  Boot ID:                    cd993e19-ae3b-4564-887c-3d6000ae6b48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  gcp-auth                    gcp-auth-5f6b4f85fd-zccgt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  ingress-nginx               ingress-nginx-controller-7967645744-xvf4n   100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m11s
	  kube-system                 coredns-5dd5756b68-4pcrt                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m22s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 csi-hostpathplugin-qp8h8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 etcd-addons-600097                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m35s
	  kube-system                 kube-apiserver-addons-600097                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-addons-600097        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-proxy-9h94v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-addons-600097                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 nvidia-device-plugin-daemonset-qctgj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 snapshot-controller-58dbcc7b99-9b2bf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 snapshot-controller-58dbcc7b99-rt5hl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-qmvcb               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m41s (x8 over 2m41s)  kubelet          Node addons-600097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m41s (x8 over 2m41s)  kubelet          Node addons-600097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s (x7 over 2m41s)  kubelet          Node addons-600097 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m35s                  kubelet          Node addons-600097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s                  kubelet          Node addons-600097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s                  kubelet          Node addons-600097 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m34s                  kubelet          Node addons-600097 status is now: NodeReady
	  Normal  RegisteredNode           2m22s                  node-controller  Node addons-600097 event: Registered Node addons-600097 in Controller
	
	
	==> dmesg <==
	[  +0.199914] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.127487] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.259157] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +9.435585] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.056351] kauditd_printk_skb: 130 callbacks suppressed
	[  +6.615625] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.591015] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[Feb29 01:13] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.345641] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.669037] kauditd_printk_skb: 134 callbacks suppressed
	[  +9.392475] kauditd_printk_skb: 66 callbacks suppressed
	[  +8.524232] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.722632] kauditd_printk_skb: 2 callbacks suppressed
	[Feb29 01:14] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.845597] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.827795] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.208770] kauditd_printk_skb: 89 callbacks suppressed
	[ +11.161226] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.324237] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.952838] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.069756] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.110661] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.070531] kauditd_printk_skb: 32 callbacks suppressed
	[Feb29 01:15] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.700025] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [f6e26b9c70186d9761efbc13ad8f6cbbf8e52ab8ce4a433d365f57e3a6f7fefd] <==
	{"level":"warn","ts":"2024-02-29T01:13:28.784875Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.541022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10580"}
	{"level":"info","ts":"2024-02-29T01:13:28.785911Z","caller":"traceutil/trace.go:171","msg":"trace[665451296] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:914; }","duration":"188.581918ms","start":"2024-02-29T01:13:28.597317Z","end":"2024-02-29T01:13:28.785899Z","steps":["trace[665451296] 'agreement among raft nodes before linearized reading'  (duration: 187.449406ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:13:28.802803Z","caller":"traceutil/trace.go:171","msg":"trace[1699804638] transaction","detail":"{read_only:false; response_revision:915; number_of_response:1; }","duration":"174.67588ms","start":"2024-02-29T01:13:28.628112Z","end":"2024-02-29T01:13:28.802788Z","steps":["trace[1699804638] 'process raft request'  (duration: 174.309037ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:13:36.340821Z","caller":"traceutil/trace.go:171","msg":"trace[1581453323] linearizableReadLoop","detail":"{readStateIndex:964; appliedIndex:963; }","duration":"243.81431ms","start":"2024-02-29T01:13:36.096981Z","end":"2024-02-29T01:13:36.340795Z","steps":["trace[1581453323] 'read index received'  (duration: 243.701799ms)","trace[1581453323] 'applied index is now lower than readState.Index'  (duration: 112.15µs)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T01:13:36.341315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"244.314191ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10580"}
	{"level":"info","ts":"2024-02-29T01:13:36.341407Z","caller":"traceutil/trace.go:171","msg":"trace[131999217] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:937; }","duration":"244.435837ms","start":"2024-02-29T01:13:36.096962Z","end":"2024-02-29T01:13:36.341397Z","steps":["trace[131999217] 'agreement among raft nodes before linearized reading'  (duration: 244.26436ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:13:36.341641Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.353341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81411"}
	{"level":"info","ts":"2024-02-29T01:13:36.341688Z","caller":"traceutil/trace.go:171","msg":"trace[878180946] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:937; }","duration":"102.402549ms","start":"2024-02-29T01:13:36.239279Z","end":"2024-02-29T01:13:36.341681Z","steps":["trace[878180946] 'agreement among raft nodes before linearized reading'  (duration: 102.268868ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:13:41.175628Z","caller":"traceutil/trace.go:171","msg":"trace[1085054370] transaction","detail":"{read_only:false; response_revision:953; number_of_response:1; }","duration":"182.80404ms","start":"2024-02-29T01:13:40.992812Z","end":"2024-02-29T01:13:41.175616Z","steps":["trace[1085054370] 'process raft request'  (duration: 182.479515ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:13:57.592875Z","caller":"traceutil/trace.go:171","msg":"trace[695287154] transaction","detail":"{read_only:false; response_revision:979; number_of_response:1; }","duration":"254.347028ms","start":"2024-02-29T01:13:57.338514Z","end":"2024-02-29T01:13:57.592861Z","steps":["trace[695287154] 'process raft request'  (duration: 254.111446ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:14:03.806798Z","caller":"traceutil/trace.go:171","msg":"trace[956647215] transaction","detail":"{read_only:false; response_revision:988; number_of_response:1; }","duration":"184.914575ms","start":"2024-02-29T01:14:03.621834Z","end":"2024-02-29T01:14:03.806749Z","steps":["trace[956647215] 'process raft request'  (duration: 184.633085ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:14:11.171588Z","caller":"traceutil/trace.go:171","msg":"trace[820998164] transaction","detail":"{read_only:false; response_revision:1031; number_of_response:1; }","duration":"146.211444ms","start":"2024-02-29T01:14:11.025354Z","end":"2024-02-29T01:14:11.171565Z","steps":["trace[820998164] 'process raft request'  (duration: 144.604827ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:14:28.181377Z","caller":"traceutil/trace.go:171","msg":"trace[855275724] linearizableReadLoop","detail":"{readStateIndex:1177; appliedIndex:1176; }","duration":"190.20134ms","start":"2024-02-29T01:14:27.991161Z","end":"2024-02-29T01:14:28.181362Z","steps":["trace[855275724] 'read index received'  (duration: 189.348974ms)","trace[855275724] 'applied index is now lower than readState.Index'  (duration: 851.804µs)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T01:14:28.181832Z","caller":"traceutil/trace.go:171","msg":"trace[2044153138] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"214.613213ms","start":"2024-02-29T01:14:27.967133Z","end":"2024-02-29T01:14:28.181747Z","steps":["trace[2044153138] 'process raft request'  (duration: 213.092318ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:14:28.182408Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.347383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T01:14:28.184282Z","caller":"traceutil/trace.go:171","msg":"trace[853152572] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1139; }","duration":"193.238667ms","start":"2024-02-29T01:14:27.991032Z","end":"2024-02-29T01:14:28.18427Z","steps":["trace[853152572] 'agreement among raft nodes before linearized reading'  (duration: 190.798593ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:14:28.183477Z","caller":"traceutil/trace.go:171","msg":"trace[52740345] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"115.812748ms","start":"2024-02-29T01:14:28.067657Z","end":"2024-02-29T01:14:28.183469Z","steps":["trace[52740345] 'process raft request'  (duration: 115.54734ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:14:36.304567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"395.876262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T01:14:36.304642Z","caller":"traceutil/trace.go:171","msg":"trace[950095716] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1182; }","duration":"395.965553ms","start":"2024-02-29T01:14:35.908666Z","end":"2024-02-29T01:14:36.304631Z","steps":["trace[950095716] 'range keys from in-memory index tree'  (duration: 395.747083ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:14:36.304673Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T01:14:35.908651Z","time spent":"396.014626ms","remote":"127.0.0.1:60152","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-02-29T01:14:36.304778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.127094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.181\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-02-29T01:14:36.304836Z","caller":"traceutil/trace.go:171","msg":"trace[990397485] range","detail":"{range_begin:/registry/masterleases/192.168.39.181; range_end:; response_count:1; response_revision:1182; }","duration":"213.22719ms","start":"2024-02-29T01:14:36.0916Z","end":"2024-02-29T01:14:36.304827Z","steps":["trace[990397485] 'range keys from in-memory index tree'  (duration: 212.939612ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:15:16.354991Z","caller":"traceutil/trace.go:171","msg":"trace[592789921] linearizableReadLoop","detail":"{readStateIndex:1574; appliedIndex:1573; }","duration":"182.05642ms","start":"2024-02-29T01:15:16.172911Z","end":"2024-02-29T01:15:16.354968Z","steps":["trace[592789921] 'read index received'  (duration: 181.931661ms)","trace[592789921] 'applied index is now lower than readState.Index'  (duration: 124.116µs)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T01:15:16.3552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.278647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5607"}
	{"level":"info","ts":"2024-02-29T01:15:16.355223Z","caller":"traceutil/trace.go:171","msg":"trace[748801209] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1518; }","duration":"182.329702ms","start":"2024-02-29T01:15:16.172887Z","end":"2024-02-29T01:15:16.355217Z","steps":["trace[748801209] 'agreement among raft nodes before linearized reading'  (duration: 182.233422ms)"],"step_count":1}
	
	
	==> gcp-auth [f6e52b89c446a5aca696d9d9c8e0653b2a045ee5d7a02e149e8c7e01d4dd28c7] <==
	2024/02/29 01:14:32 GCP Auth Webhook started!
	2024/02/29 01:14:38 Ready to marshal response ...
	2024/02/29 01:14:38 Ready to write response ...
	2024/02/29 01:14:38 Ready to marshal response ...
	2024/02/29 01:14:38 Ready to write response ...
	2024/02/29 01:14:48 Ready to marshal response ...
	2024/02/29 01:14:48 Ready to write response ...
	2024/02/29 01:14:49 Ready to marshal response ...
	2024/02/29 01:14:49 Ready to write response ...
	2024/02/29 01:14:50 Ready to marshal response ...
	2024/02/29 01:14:50 Ready to write response ...
	2024/02/29 01:15:03 Ready to marshal response ...
	2024/02/29 01:15:03 Ready to write response ...
	2024/02/29 01:15:10 Ready to marshal response ...
	2024/02/29 01:15:10 Ready to write response ...
	
	
	==> kernel <==
	 01:15:22 up 3 min,  0 users,  load average: 3.00, 1.69, 0.68
	Linux addons-600097 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5009adb028085947daae1ba7fef2a0f2ae731e3c2e5efeed752043cdc2f9d0ea] <==
	I0229 01:13:11.517214       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.99.123.154"}
	I0229 01:13:11.544199       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.109.223.154"}
	I0229 01:13:11.612301       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0229 01:13:11.978369       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.107.152.94"}
	I0229 01:13:11.989877       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0229 01:13:12.145143       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.97.184.127"}
	W0229 01:13:12.476701       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0229 01:13:13.171981       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 01:13:14.727678       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.109.174.115"}
	E0229 01:13:24.992301       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.206.59:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.206.59:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.206.59:443: connect: connection refused
	W0229 01:13:24.992654       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 01:13:24.993713       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 01:13:25.018437       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 01:13:25.030358       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 01:13:44.348499       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 01:14:44.347595       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 01:15:03.205459       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0229 01:15:03.389608       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.88.235"}
	I0229 01:15:05.362373       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0229 01:15:05.368792       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0229 01:15:05.784290       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	W0229 01:15:06.403040       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [4954df1875ea7603adf964a5a13be727dc7fe5a12501c20f689a1fb5d72ecb65] <==
	I0229 01:14:36.799245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="14.338068ms"
	I0229 01:14:36.800820       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="96.217µs"
	I0229 01:14:37.998798       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0229 01:14:38.027647       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 01:14:38.187644       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 01:14:44.445894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-6548d5df46" duration="4.486µs"
	I0229 01:14:45.712421       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 01:14:50.036357       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0229 01:14:50.037345       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0229 01:14:50.140970       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0229 01:14:50.153440       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0229 01:14:50.625261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="4.106µs"
	I0229 01:14:55.886963       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="8.988µs"
	I0229 01:14:58.590513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="6.938µs"
	I0229 01:15:00.713184       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 01:15:02.727160       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-69cf46c98" duration="5.776µs"
	E0229 01:15:06.405422       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 01:15:07.962414       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 01:15:07.962517       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0229 01:15:09.187789       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0229 01:15:10.653478       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 01:15:10.653507       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0229 01:15:14.887132       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0229 01:15:14.887192       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0229 01:15:15.670927       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	
	
	==> kube-proxy [04b3d1fabb914b27d0d0b03918098e702230a9210cfb28a7fac5e69d894e252a] <==
	I0229 01:13:02.098829       1 server_others.go:69] "Using iptables proxy"
	I0229 01:13:02.113542       1 node.go:141] Successfully retrieved node IP: 192.168.39.181
	I0229 01:13:02.217460       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 01:13:02.217505       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 01:13:02.222723       1 server_others.go:152] "Using iptables Proxier"
	I0229 01:13:02.222781       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 01:13:02.223140       1 server.go:846] "Version info" version="v1.28.4"
	I0229 01:13:02.223176       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 01:13:02.224555       1 config.go:188] "Starting service config controller"
	I0229 01:13:02.224595       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 01:13:02.224613       1 config.go:97] "Starting endpoint slice config controller"
	I0229 01:13:02.224617       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 01:13:02.224897       1 config.go:315] "Starting node config controller"
	I0229 01:13:02.224937       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 01:13:02.325583       1 shared_informer.go:318] Caches are synced for node config
	I0229 01:13:02.325629       1 shared_informer.go:318] Caches are synced for service config
	I0229 01:13:02.325651       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9425dce69f24a188c47d5add3884b5f287c7e60724dab40e088ac0e0b54c0708] <==
	E0229 01:12:44.571255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 01:12:44.570696       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 01:12:44.571336       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 01:12:44.570862       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 01:12:44.571417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 01:12:44.571446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 01:12:44.573371       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 01:12:44.573414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 01:12:45.394265       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 01:12:45.394357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 01:12:45.398042       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 01:12:45.398181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 01:12:45.402679       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 01:12:45.403661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 01:12:45.460352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 01:12:45.460473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 01:12:45.491939       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 01:12:45.492151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 01:12:45.496901       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 01:12:45.499327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 01:12:45.634009       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 01:12:45.634179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 01:12:45.637523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 01:12:45.638156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0229 01:12:46.062906       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 01:15:07 addons-600097 kubelet[1217]: I0229 01:15:07.866288    1217 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c0322ab5-1e1e-484e-a540-c0ed56db9437" path="/var/lib/kubelet/pods/c0322ab5-1e1e-484e-a540-c0ed56db9437/volumes"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: I0229 01:15:10.172792    1217 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=3.469072199 podCreationTimestamp="2024-02-29 01:15:03 +0000 UTC" firstStartedPulling="2024-02-29 01:15:03.849279135 +0000 UTC m=+136.192826155" lastFinishedPulling="2024-02-29 01:15:07.552958435 +0000 UTC m=+139.896505466" observedRunningTime="2024-02-29 01:15:07.89881568 +0000 UTC m=+140.242362719" watchObservedRunningTime="2024-02-29 01:15:10.17275151 +0000 UTC m=+142.516298550"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: I0229 01:15:10.174186    1217 topology_manager.go:215] "Topology Admit Handler" podUID="74ced382-368b-4f97-9124-f2ba65827e5d" podNamespace="default" podName="task-pv-pod"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: E0229 01:15:10.174436    1217 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0322ab5-1e1e-484e-a540-c0ed56db9437" containerName="gadget"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: E0229 01:15:10.174526    1217 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0322ab5-1e1e-484e-a540-c0ed56db9437" containerName="gadget"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: E0229 01:15:10.174646    1217 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e7098420-28d2-4a6b-a93d-4fefa31359b3" containerName="metrics-server"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: I0229 01:15:10.174774    1217 memory_manager.go:346] "RemoveStaleState removing state" podUID="c0322ab5-1e1e-484e-a540-c0ed56db9437" containerName="gadget"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: I0229 01:15:10.174846    1217 memory_manager.go:346] "RemoveStaleState removing state" podUID="e7098420-28d2-4a6b-a93d-4fefa31359b3" containerName="metrics-server"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: I0229 01:15:10.174932    1217 memory_manager.go:346] "RemoveStaleState removing state" podUID="c0322ab5-1e1e-484e-a540-c0ed56db9437" containerName="gadget"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: I0229 01:15:10.248040    1217 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x5vv\" (UniqueName: \"kubernetes.io/projected/74ced382-368b-4f97-9124-f2ba65827e5d-kube-api-access-8x5vv\") pod \"task-pv-pod\" (UID: \"74ced382-368b-4f97-9124-f2ba65827e5d\") " pod="default/task-pv-pod"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: I0229 01:15:10.248239    1217 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-00a0353e-bd80-433e-9cff-08635d6db0ee\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^fadcc989-d69f-11ee-bfc5-228f373b5ae3\") pod \"task-pv-pod\" (UID: \"74ced382-368b-4f97-9124-f2ba65827e5d\") " pod="default/task-pv-pod"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: I0229 01:15:10.248275    1217 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/74ced382-368b-4f97-9124-f2ba65827e5d-gcp-creds\") pod \"task-pv-pod\" (UID: \"74ced382-368b-4f97-9124-f2ba65827e5d\") " pod="default/task-pv-pod"
	Feb 29 01:15:10 addons-600097 kubelet[1217]: I0229 01:15:10.364287    1217 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-00a0353e-bd80-433e-9cff-08635d6db0ee\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^fadcc989-d69f-11ee-bfc5-228f373b5ae3\") pod \"task-pv-pod\" (UID: \"74ced382-368b-4f97-9124-f2ba65827e5d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/17f8e3070a881fae3135f4d47bbeadd7236cf13c0e55061435ec7863badadd00/globalmount\"" pod="default/task-pv-pod"
	Feb 29 01:15:20 addons-600097 kubelet[1217]: I0229 01:15:20.971214    1217 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/task-pv-pod" podStartSLOduration=3.879233713 podCreationTimestamp="2024-02-29 01:15:10 +0000 UTC" firstStartedPulling="2024-02-29 01:15:10.792830482 +0000 UTC m=+143.136377501" lastFinishedPulling="2024-02-29 01:15:17.884769065 +0000 UTC m=+150.228316084" observedRunningTime="2024-02-29 01:15:18.072674577 +0000 UTC m=+150.416221611" watchObservedRunningTime="2024-02-29 01:15:20.971172296 +0000 UTC m=+153.314719332"
	Feb 29 01:15:21 addons-600097 kubelet[1217]: I0229 01:15:21.040645    1217 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpsvt\" (UniqueName: \"kubernetes.io/projected/408e9673-387a-4d66-8c04-1e7f16476def-kube-api-access-vpsvt\") pod \"408e9673-387a-4d66-8c04-1e7f16476def\" (UID: \"408e9673-387a-4d66-8c04-1e7f16476def\") "
	Feb 29 01:15:21 addons-600097 kubelet[1217]: I0229 01:15:21.040703    1217 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/408e9673-387a-4d66-8c04-1e7f16476def-config-volume\") pod \"408e9673-387a-4d66-8c04-1e7f16476def\" (UID: \"408e9673-387a-4d66-8c04-1e7f16476def\") "
	Feb 29 01:15:21 addons-600097 kubelet[1217]: I0229 01:15:21.041265    1217 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/408e9673-387a-4d66-8c04-1e7f16476def-config-volume" (OuterVolumeSpecName: "config-volume") pod "408e9673-387a-4d66-8c04-1e7f16476def" (UID: "408e9673-387a-4d66-8c04-1e7f16476def"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 29 01:15:21 addons-600097 kubelet[1217]: I0229 01:15:21.057494    1217 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408e9673-387a-4d66-8c04-1e7f16476def-kube-api-access-vpsvt" (OuterVolumeSpecName: "kube-api-access-vpsvt") pod "408e9673-387a-4d66-8c04-1e7f16476def" (UID: "408e9673-387a-4d66-8c04-1e7f16476def"). InnerVolumeSpecName "kube-api-access-vpsvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 29 01:15:21 addons-600097 kubelet[1217]: I0229 01:15:21.088936    1217 scope.go:117] "RemoveContainer" containerID="5fac829763bcb1228e6a505269f1f56f147d1d4a039464b89262e974fc4836a1"
	Feb 29 01:15:21 addons-600097 kubelet[1217]: I0229 01:15:21.143111    1217 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vpsvt\" (UniqueName: \"kubernetes.io/projected/408e9673-387a-4d66-8c04-1e7f16476def-kube-api-access-vpsvt\") on node \"addons-600097\" DevicePath \"\""
	Feb 29 01:15:21 addons-600097 kubelet[1217]: I0229 01:15:21.143139    1217 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/408e9673-387a-4d66-8c04-1e7f16476def-config-volume\") on node \"addons-600097\" DevicePath \"\""
	Feb 29 01:15:21 addons-600097 kubelet[1217]: I0229 01:15:21.177487    1217 scope.go:117] "RemoveContainer" containerID="5fac829763bcb1228e6a505269f1f56f147d1d4a039464b89262e974fc4836a1"
	Feb 29 01:15:21 addons-600097 kubelet[1217]: E0229 01:15:21.178037    1217 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fac829763bcb1228e6a505269f1f56f147d1d4a039464b89262e974fc4836a1\": container with ID starting with 5fac829763bcb1228e6a505269f1f56f147d1d4a039464b89262e974fc4836a1 not found: ID does not exist" containerID="5fac829763bcb1228e6a505269f1f56f147d1d4a039464b89262e974fc4836a1"
	Feb 29 01:15:21 addons-600097 kubelet[1217]: I0229 01:15:21.178150    1217 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fac829763bcb1228e6a505269f1f56f147d1d4a039464b89262e974fc4836a1"} err="failed to get container status \"5fac829763bcb1228e6a505269f1f56f147d1d4a039464b89262e974fc4836a1\": rpc error: code = NotFound desc = could not find container \"5fac829763bcb1228e6a505269f1f56f147d1d4a039464b89262e974fc4836a1\": container with ID starting with 5fac829763bcb1228e6a505269f1f56f147d1d4a039464b89262e974fc4836a1 not found: ID does not exist"
	Feb 29 01:15:21 addons-600097 kubelet[1217]: I0229 01:15:21.871664    1217 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="408e9673-387a-4d66-8c04-1e7f16476def" path="/var/lib/kubelet/pods/408e9673-387a-4d66-8c04-1e7f16476def/volumes"
	
	
	==> storage-provisioner [8a2a2ea68ffb9c400b6cf9fe38eee3ecee40957dc05b127ebb08c7df6de024e1] <==
	I0229 01:13:10.702347       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 01:13:10.716643       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 01:13:10.716717       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 01:13:10.740791       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 01:13:10.747026       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-600097_0a298beb-c23d-4fb0-80b7-c71bf445f0b7!
	I0229 01:13:10.747664       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"76da4364-c960-4fa6-810a-ee2c399f8169", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-600097_0a298beb-c23d-4fb0-80b7-c71bf445f0b7 became leader
	I0229 01:13:10.851269       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-600097_0a298beb-c23d-4fb0-80b7-c71bf445f0b7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-600097 -n addons-600097
helpers_test.go:261: (dbg) Run:  kubectl --context addons-600097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-sgdfs ingress-nginx-admission-patch-rgrk5
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/NvidiaDevicePlugin]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-600097 describe pod ingress-nginx-admission-create-sgdfs ingress-nginx-admission-patch-rgrk5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-600097 describe pod ingress-nginx-admission-create-sgdfs ingress-nginx-admission-patch-rgrk5: exit status 1 (60.967954ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sgdfs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rgrk5" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-600097 describe pod ingress-nginx-admission-create-sgdfs ingress-nginx-admission-patch-rgrk5: exit status 1
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (7.77s)
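Note: the post-mortem "describe pod" above returns NotFound because the two ingress-nginx admission pods are short-lived Job pods that were garbage-collected between the "get po" listing and the "describe" call. A hedged follow-up sketch, not part of the harness; the Job names are inferred from the pod names above and assumed to still exist:

	# Hypothetical follow-up: the admission pods are owned by Jobs, and a Job's
	# status records the outcome even after its pods have been reaped.
	kubectl --context addons-600097 -n ingress-nginx get jobs
	kubectl --context addons-600097 -n ingress-nginx describe job ingress-nginx-admission-create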

TestAddons/StoppedEnableDisable (154.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-600097
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-600097: exit status 82 (2m0.283468041s)

-- stdout --
	* Stopping node "addons-600097"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-600097" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-600097
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-600097: exit status 11 (21.648566538s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.181:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-600097" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-600097
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-600097: exit status 11 (6.143628752s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.181:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-600097" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-600097
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-600097: exit status 11 (6.144011518s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.181:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-600097" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.22s)
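Note: the failure chain here starts with "minikube stop" timing out (exit 82, GUEST_STOP_TIMEOUT) while the VM stays in state "Running"; every later addon command then fails at the SSH dial to 192.168.39.181:22 with "no route to host". A minimal triage sketch, run outside the harness, assuming the kvm2 driver and the qemu:///system URI that minikube logs elsewhere in this report:

	# Hypothetical triage of a stop timeout on the kvm2 driver:
	virsh -c qemu:///system list --all    # does libvirt still show the domain as running?
	minikube status -p addons-600097      # minikube's own view of the node state
	nc -vz -w 5 192.168.39.181 22         # reachability of the port the addon commands dial

If libvirt still reports the domain running after the stop, the SSH dial failures in the addon commands are a downstream symptom rather than an independent fault.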

TestIngressAddonLegacy/StartLegacyK8sCluster (287.16s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-568478 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0229 01:25:18.791563  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:25:59.752016  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:27:21.674708  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:29:09.040193  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:09.045532  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:09.055845  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:09.076146  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:09.116461  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:09.196783  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:09.357240  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:09.677880  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:10.318825  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:11.599357  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:14.160263  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:19.280945  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:29.521317  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:29:37.828010  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:29:50.001545  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
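Note: these cert_rotation errors come from client-go's certificate-rotation watcher inside the long-running test process (pid 323885): it still holds watches on client.crt files for the earlier addons-600097 and functional-921098 profiles, whose directories were removed when those profiles were deleted, so each refresh attempt fails with "no such file or directory". They are noise with respect to this test. A hedged check, assuming the profile layout shown in the paths above:

	# Hypothetical check: the watched profiles should be absent, the current one present.
	ls /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/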
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-568478 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: exit status 109 (4m47.109699585s)

-- stdout --
	* [ingress-addon-legacy-568478] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node ingress-addon-legacy-568478 in cluster ingress-addon-legacy-568478
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.18.20 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0229 01:25:04.909725  333512 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:25:04.910214  333512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:25:04.910245  333512 out.go:304] Setting ErrFile to fd 2...
	I0229 01:25:04.910253  333512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:25:04.910707  333512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:25:04.911656  333512 out.go:298] Setting JSON to false
	I0229 01:25:04.912677  333512 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4048,"bootTime":1709165857,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:25:04.912746  333512 start.go:139] virtualization: kvm guest
	I0229 01:25:04.914532  333512 out.go:177] * [ingress-addon-legacy-568478] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:25:04.916152  333512 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:25:04.916206  333512 notify.go:220] Checking for updates...
	I0229 01:25:04.917439  333512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:25:04.918794  333512 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:25:04.920160  333512 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:25:04.921426  333512 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:25:04.922656  333512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:25:04.924027  333512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:25:04.957333  333512 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 01:25:04.958484  333512 start.go:299] selected driver: kvm2
	I0229 01:25:04.958501  333512 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:25:04.958527  333512 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:25:04.959235  333512 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:25:04.959335  333512 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:25:04.973972  333512 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:25:04.974031  333512 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:25:04.974234  333512 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 01:25:04.974309  333512 cni.go:84] Creating CNI manager for ""
	I0229 01:25:04.974323  333512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:25:04.974331  333512 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 01:25:04.974342  333512 start_flags.go:323] config:
	{Name:ingress-addon-legacy-568478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-568478 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:25:04.974477  333512 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:25:04.976033  333512 out.go:177] * Starting control plane node ingress-addon-legacy-568478 in cluster ingress-addon-legacy-568478
	I0229 01:25:04.977155  333512 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0229 01:25:05.082758  333512 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0229 01:25:05.082790  333512 cache.go:56] Caching tarball of preloaded images
	I0229 01:25:05.082931  333512 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0229 01:25:05.084666  333512 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 01:25:05.085871  333512 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0229 01:25:05.194534  333512 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0229 01:25:22.631352  333512 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0229 01:25:22.631460  333512 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0229 01:25:23.569020  333512 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0229 01:25:23.569439  333512 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/config.json ...
	I0229 01:25:23.569485  333512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/config.json: {Name:mkcc0e54b2cd198146564a592fcc455f2c61964f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:25:23.569689  333512 start.go:365] acquiring machines lock for ingress-addon-legacy-568478: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:25:23.569731  333512 start.go:369] acquired machines lock for "ingress-addon-legacy-568478" in 19.145µs
	I0229 01:25:23.569755  333512 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-568478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-568478 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 01:25:23.569887  333512 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 01:25:23.571883  333512 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0229 01:25:23.572077  333512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:25:23.572145  333512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:25:23.587367  333512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35643
	I0229 01:25:23.587863  333512 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:25:23.588406  333512 main.go:141] libmachine: Using API Version  1
	I0229 01:25:23.588437  333512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:25:23.588782  333512 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:25:23.588992  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetMachineName
	I0229 01:25:23.589166  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:25:23.589349  333512 start.go:159] libmachine.API.Create for "ingress-addon-legacy-568478" (driver="kvm2")
	I0229 01:25:23.589379  333512 client.go:168] LocalClient.Create starting
	I0229 01:25:23.589409  333512 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem
	I0229 01:25:23.589446  333512 main.go:141] libmachine: Decoding PEM data...
	I0229 01:25:23.589462  333512 main.go:141] libmachine: Parsing certificate...
	I0229 01:25:23.589528  333512 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem
	I0229 01:25:23.589556  333512 main.go:141] libmachine: Decoding PEM data...
	I0229 01:25:23.589571  333512 main.go:141] libmachine: Parsing certificate...
	I0229 01:25:23.589586  333512 main.go:141] libmachine: Running pre-create checks...
	I0229 01:25:23.589595  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .PreCreateCheck
	I0229 01:25:23.589968  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetConfigRaw
	I0229 01:25:23.590415  333512 main.go:141] libmachine: Creating machine...
	I0229 01:25:23.590437  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .Create
	I0229 01:25:23.590619  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Creating KVM machine...
	I0229 01:25:23.591931  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found existing default KVM network
	I0229 01:25:23.592588  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:23.592461  333568 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015aa0}
	I0229 01:25:23.597666  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | trying to create private KVM network mk-ingress-addon-legacy-568478 192.168.39.0/24...
	I0229 01:25:23.661380  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | private KVM network mk-ingress-addon-legacy-568478 192.168.39.0/24 created
	I0229 01:25:23.661412  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Setting up store path in /home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478 ...
	I0229 01:25:23.661430  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:23.661334  333568 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:25:23.661443  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Building disk image from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 01:25:23.661534  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Downloading /home/jenkins/minikube-integration/18063-316644/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 01:25:23.914632  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:23.914509  333568 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa...
	I0229 01:25:23.976892  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:23.976765  333568 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/ingress-addon-legacy-568478.rawdisk...
	I0229 01:25:23.976926  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Writing magic tar header
	I0229 01:25:23.976945  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Writing SSH key tar header
	I0229 01:25:23.976958  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:23.976893  333568 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478 ...
	I0229 01:25:23.977039  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478
	I0229 01:25:23.977068  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines
	I0229 01:25:23.977084  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478 (perms=drwx------)
	I0229 01:25:23.977103  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines (perms=drwxr-xr-x)
	I0229 01:25:23.977113  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube (perms=drwxr-xr-x)
	I0229 01:25:23.977123  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644 (perms=drwxrwxr-x)
	I0229 01:25:23.977136  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 01:25:23.977147  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:25:23.977161  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 01:25:23.977171  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644
	I0229 01:25:23.977184  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 01:25:23.977192  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Checking permissions on dir: /home/jenkins
	I0229 01:25:23.977198  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Creating domain...
	I0229 01:25:23.977210  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Checking permissions on dir: /home
	I0229 01:25:23.977218  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Skipping /home - not owner
	I0229 01:25:23.978180  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) define libvirt domain using xml: 
	I0229 01:25:23.978213  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) <domain type='kvm'>
	I0229 01:25:23.978240  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   <name>ingress-addon-legacy-568478</name>
	I0229 01:25:23.978250  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   <memory unit='MiB'>4096</memory>
	I0229 01:25:23.978268  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   <vcpu>2</vcpu>
	I0229 01:25:23.978285  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   <features>
	I0229 01:25:23.978294  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <acpi/>
	I0229 01:25:23.978300  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <apic/>
	I0229 01:25:23.978309  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <pae/>
	I0229 01:25:23.978316  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     
	I0229 01:25:23.978325  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   </features>
	I0229 01:25:23.978336  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   <cpu mode='host-passthrough'>
	I0229 01:25:23.978345  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   
	I0229 01:25:23.978354  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   </cpu>
	I0229 01:25:23.978366  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   <os>
	I0229 01:25:23.978378  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <type>hvm</type>
	I0229 01:25:23.978398  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <boot dev='cdrom'/>
	I0229 01:25:23.978416  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <boot dev='hd'/>
	I0229 01:25:23.978425  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <bootmenu enable='no'/>
	I0229 01:25:23.978439  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   </os>
	I0229 01:25:23.978476  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   <devices>
	I0229 01:25:23.978510  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <disk type='file' device='cdrom'>
	I0229 01:25:23.978547  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/boot2docker.iso'/>
	I0229 01:25:23.978571  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <target dev='hdc' bus='scsi'/>
	I0229 01:25:23.978586  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <readonly/>
	I0229 01:25:23.978596  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     </disk>
	I0229 01:25:23.978610  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <disk type='file' device='disk'>
	I0229 01:25:23.978623  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 01:25:23.978637  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/ingress-addon-legacy-568478.rawdisk'/>
	I0229 01:25:23.978651  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <target dev='hda' bus='virtio'/>
	I0229 01:25:23.978671  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     </disk>
	I0229 01:25:23.978683  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <interface type='network'>
	I0229 01:25:23.978697  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <source network='mk-ingress-addon-legacy-568478'/>
	I0229 01:25:23.978708  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <model type='virtio'/>
	I0229 01:25:23.978728  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     </interface>
	I0229 01:25:23.978746  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <interface type='network'>
	I0229 01:25:23.978763  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <source network='default'/>
	I0229 01:25:23.978781  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <model type='virtio'/>
	I0229 01:25:23.978797  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     </interface>
	I0229 01:25:23.978814  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <serial type='pty'>
	I0229 01:25:23.978820  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <target port='0'/>
	I0229 01:25:23.978826  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     </serial>
	I0229 01:25:23.978836  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <console type='pty'>
	I0229 01:25:23.978845  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <target type='serial' port='0'/>
	I0229 01:25:23.978854  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     </console>
	I0229 01:25:23.978866  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     <rng model='virtio'>
	I0229 01:25:23.978880  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)       <backend model='random'>/dev/random</backend>
	I0229 01:25:23.978891  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     </rng>
	I0229 01:25:23.978902  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     
	I0229 01:25:23.978909  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)     
	I0229 01:25:23.978917  333512 main.go:141] libmachine: (ingress-addon-legacy-568478)   </devices>
	I0229 01:25:23.978929  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) </domain>
	I0229 01:25:23.978941  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) 
	I0229 01:25:23.983040  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:d3:ec:d1 in network default
	I0229 01:25:23.983607  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Ensuring networks are active...
	I0229 01:25:23.983630  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:23.984244  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Ensuring network default is active
	I0229 01:25:23.984515  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Ensuring network mk-ingress-addon-legacy-568478 is active
	I0229 01:25:23.985007  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Getting domain xml...
	I0229 01:25:23.985620  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Creating domain...
	I0229 01:25:25.170493  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Waiting to get IP...
	I0229 01:25:25.171325  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:25.171769  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:25.171786  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:25.171748  333568 retry.go:31] will retry after 209.92394ms: waiting for machine to come up
	I0229 01:25:25.383420  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:25.383825  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:25.383885  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:25.383800  333568 retry.go:31] will retry after 307.646216ms: waiting for machine to come up
	I0229 01:25:25.693289  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:25.693744  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:25.693781  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:25.693687  333568 retry.go:31] will retry after 313.349634ms: waiting for machine to come up
	I0229 01:25:26.008064  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:26.008481  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:26.008528  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:26.008386  333568 retry.go:31] will retry after 595.789377ms: waiting for machine to come up
	I0229 01:25:26.606232  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:26.606641  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:26.606664  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:26.606601  333568 retry.go:31] will retry after 468.776633ms: waiting for machine to come up
	I0229 01:25:27.077356  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:27.077760  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:27.077791  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:27.077695  333568 retry.go:31] will retry after 857.232785ms: waiting for machine to come up
	I0229 01:25:27.936839  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:27.937248  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:27.937277  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:27.937232  333568 retry.go:31] will retry after 1.049991806s: waiting for machine to come up
	I0229 01:25:28.988762  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:28.989153  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:28.989183  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:28.989118  333568 retry.go:31] will retry after 976.590511ms: waiting for machine to come up
	I0229 01:25:29.967417  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:29.967839  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:29.967869  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:29.967787  333568 retry.go:31] will retry after 1.279651059s: waiting for machine to come up
	I0229 01:25:31.249256  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:31.249627  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:31.249655  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:31.249583  333568 retry.go:31] will retry after 1.905301276s: waiting for machine to come up
	I0229 01:25:33.157619  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:33.157999  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:33.158032  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:33.157934  333568 retry.go:31] will retry after 2.797940131s: waiting for machine to come up
	I0229 01:25:35.959194  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:35.959557  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:35.959593  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:35.959488  333568 retry.go:31] will retry after 3.334329298s: waiting for machine to come up
	I0229 01:25:39.295253  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:39.295514  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:39.295537  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:39.295487  333568 retry.go:31] will retry after 3.00294596s: waiting for machine to come up
	I0229 01:25:42.301634  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:42.302039  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find current IP address of domain ingress-addon-legacy-568478 in network mk-ingress-addon-legacy-568478
	I0229 01:25:42.302068  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | I0229 01:25:42.301983  333568 retry.go:31] will retry after 4.6660463s: waiting for machine to come up
	I0229 01:25:46.972136  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:46.972582  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Found IP for machine: 192.168.39.209
	I0229 01:25:46.972605  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Reserving static IP address...
	I0229 01:25:46.972622  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has current primary IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:46.973017  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-568478", mac: "52:54:00:38:89:37", ip: "192.168.39.209"} in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.044634  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Getting to WaitForSSH function...
	I0229 01:25:47.044671  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Reserved static IP address: 192.168.39.209
	I0229 01:25:47.044735  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Waiting for SSH to be available...
	I0229 01:25:47.047051  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.047372  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:minikube Clientid:01:52:54:00:38:89:37}
	I0229 01:25:47.047406  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.047531  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Using SSH client type: external
	I0229 01:25:47.047548  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa (-rw-------)
	I0229 01:25:47.047588  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:25:47.047610  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | About to run SSH command:
	I0229 01:25:47.047620  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | exit 0
	I0229 01:25:47.174342  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | SSH cmd err, output: <nil>: 
	I0229 01:25:47.174653  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) KVM machine creation complete!
	I0229 01:25:47.175017  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetConfigRaw
	I0229 01:25:47.175614  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:25:47.175796  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:25:47.175945  333512 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 01:25:47.175962  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetState
	I0229 01:25:47.176973  333512 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 01:25:47.176990  333512 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 01:25:47.177010  333512 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 01:25:47.177023  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:47.179055  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.179369  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:47.179418  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.179542  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:25:47.179748  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:47.179925  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:47.180084  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:25:47.180285  333512 main.go:141] libmachine: Using SSH client type: native
	I0229 01:25:47.180528  333512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0229 01:25:47.180541  333512 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 01:25:47.285495  333512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:25:47.285523  333512 main.go:141] libmachine: Detecting the provisioner...
	I0229 01:25:47.285536  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:47.288144  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.288473  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:47.288506  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.288671  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:25:47.288877  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:47.289040  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:47.289191  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:25:47.289385  333512 main.go:141] libmachine: Using SSH client type: native
	I0229 01:25:47.289563  333512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0229 01:25:47.289579  333512 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 01:25:47.397955  333512 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 01:25:47.398038  333512 main.go:141] libmachine: found compatible host: buildroot
	I0229 01:25:47.398052  333512 main.go:141] libmachine: Provisioning with buildroot...
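(Editor's note: the provisioner detection above works by cat-ing /etc/os-release and matching the ID field, which is "buildroot" for this guest image. A rough sketch of that parse, assuming only the ID key matters; this is a hypothetical helper, not libmachine's real detector:)

    package sketch

    import (
        "bufio"
        "strings"
    )

    // detectProvisioner extracts the ID= field from /etc/os-release content,
    // which is what maps this guest to the "buildroot" provisioner above.
    func detectProvisioner(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
                return strings.Trim(v, `"`)
            }
        }
        return ""
    }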
	I0229 01:25:47.398066  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetMachineName
	I0229 01:25:47.398365  333512 buildroot.go:166] provisioning hostname "ingress-addon-legacy-568478"
	I0229 01:25:47.398399  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetMachineName
	I0229 01:25:47.398602  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:47.401084  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.401436  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:47.401471  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.401613  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:25:47.401780  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:47.401907  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:47.402016  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:25:47.402164  333512 main.go:141] libmachine: Using SSH client type: native
	I0229 01:25:47.402386  333512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0229 01:25:47.402408  333512 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-568478 && echo "ingress-addon-legacy-568478" | sudo tee /etc/hostname
	I0229 01:25:47.526590  333512 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-568478
	
	I0229 01:25:47.526616  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:47.529524  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.529881  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:47.529914  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.530055  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:25:47.530273  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:47.530456  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:47.530547  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:25:47.530681  333512 main.go:141] libmachine: Using SSH client type: native
	I0229 01:25:47.530930  333512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0229 01:25:47.530952  333512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-568478' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-568478/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-568478' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:25:47.648714  333512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:25:47.648756  333512 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 01:25:47.648808  333512 buildroot.go:174] setting up certificates
	I0229 01:25:47.648825  333512 provision.go:83] configureAuth start
	I0229 01:25:47.648843  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetMachineName
	I0229 01:25:47.649132  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetIP
	I0229 01:25:47.651666  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.652084  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:47.652114  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.652226  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:47.654433  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.654776  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:47.654808  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.654928  333512 provision.go:138] copyHostCerts
	I0229 01:25:47.654964  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 01:25:47.654997  333512 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 01:25:47.655019  333512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 01:25:47.655083  333512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 01:25:47.655156  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 01:25:47.655177  333512 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 01:25:47.655183  333512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 01:25:47.655207  333512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 01:25:47.655250  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 01:25:47.655265  333512 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 01:25:47.655271  333512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 01:25:47.655291  333512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 01:25:47.655339  333512 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-568478 san=[192.168.39.209 192.168.39.209 localhost 127.0.0.1 minikube ingress-addon-legacy-568478]
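(Editor's note: the server certificate generated above carries both IP and DNS SANs, so one cert is valid for 192.168.39.209, localhost, minikube, and the machine name. A compact sketch of issuing such a cert with Go's crypto/x509; self-signed here for brevity, whereas the logged flow signs with the ca.pem/ca-key.pem pair:)

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a certificate whose SANs cover the given IPs and
    // DNS names, like the san=[...] list in the log above.
    func newServerCert(ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"sketch"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dnsNames,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        return der, key, err
    }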
	I0229 01:25:47.876485  333512 provision.go:172] copyRemoteCerts
	I0229 01:25:47.876550  333512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:25:47.876576  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:47.880496  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.880931  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:47.880963  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:47.881125  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:25:47.881335  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:47.881502  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:25:47.881659  333512 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa Username:docker}
	I0229 01:25:47.964654  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 01:25:47.964716  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 01:25:47.990895  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 01:25:47.990943  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0229 01:25:48.016590  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 01:25:48.016659  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 01:25:48.042816  333512 provision.go:86] duration metric: configureAuth took 393.973829ms
	I0229 01:25:48.042838  333512 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:25:48.043018  333512 config.go:182] Loaded profile config "ingress-addon-legacy-568478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 01:25:48.043103  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:48.045739  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.046073  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:48.046105  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.046201  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:25:48.046414  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:48.046609  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:48.046766  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:25:48.046986  333512 main.go:141] libmachine: Using SSH client type: native
	I0229 01:25:48.047159  333512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0229 01:25:48.047174  333512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 01:25:48.325091  333512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 01:25:48.325120  333512 main.go:141] libmachine: Checking connection to Docker...
	I0229 01:25:48.325130  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetURL
	I0229 01:25:48.326441  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Using libvirt version 6000000
	I0229 01:25:48.328410  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.328746  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:48.328776  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.328931  333512 main.go:141] libmachine: Docker is up and running!
	I0229 01:25:48.328943  333512 main.go:141] libmachine: Reticulating splines...
	I0229 01:25:48.328950  333512 client.go:171] LocalClient.Create took 24.739562286s
	I0229 01:25:48.328985  333512 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-568478" took 24.739636289s
	I0229 01:25:48.328999  333512 start.go:300] post-start starting for "ingress-addon-legacy-568478" (driver="kvm2")
	I0229 01:25:48.329014  333512 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:25:48.329036  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:25:48.329306  333512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:25:48.329345  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:48.331533  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.331873  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:48.331903  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.332031  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:25:48.332224  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:48.332406  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:25:48.332579  333512 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa Username:docker}
	I0229 01:25:48.417314  333512 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:25:48.421953  333512 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:25:48.421981  333512 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 01:25:48.422056  333512 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 01:25:48.422153  333512 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 01:25:48.422165  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> /etc/ssl/certs/3238852.pem
	I0229 01:25:48.422298  333512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:25:48.432065  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 01:25:48.458653  333512 start.go:303] post-start completed in 129.635731ms
	I0229 01:25:48.458704  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetConfigRaw
	I0229 01:25:48.459316  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetIP
	I0229 01:25:48.461839  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.462184  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:48.462212  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.462561  333512 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/config.json ...
	I0229 01:25:48.462779  333512 start.go:128] duration metric: createHost completed in 24.89287897s
	I0229 01:25:48.462838  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:48.465101  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.465476  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:48.465499  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.465633  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:25:48.465815  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:48.465972  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:48.466096  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:25:48.466273  333512 main.go:141] libmachine: Using SSH client type: native
	I0229 01:25:48.466475  333512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0229 01:25:48.466488  333512 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 01:25:48.575345  333512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709169948.558259947
	
	I0229 01:25:48.575368  333512 fix.go:206] guest clock: 1709169948.558259947
	I0229 01:25:48.575375  333512 fix.go:219] Guest: 2024-02-29 01:25:48.558259947 +0000 UTC Remote: 2024-02-29 01:25:48.462819041 +0000 UTC m=+43.599336045 (delta=95.440906ms)
	I0229 01:25:48.575411  333512 fix.go:190] guest clock delta is within tolerance: 95.440906ms
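(Editor's note: the clock check above runs `date +%s.%N` on the guest and compares it against the host timestamp recorded just before; a delta inside the tolerance, here about 95ms, means no clock fix-up is needed. A sketch of parsing that output; the tolerance handling is assumed, not taken from minikube:)

    package sketch

    import (
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta parses `date +%s.%N` output (e.g. "1709169948.558259947")
    // and returns the guest-minus-host clock offset. float64 parsing loses
    // sub-microsecond precision, which is fine for a skew check like this.
    func guestClockDelta(output string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(output), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostNow), nil
    }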
	I0229 01:25:48.575416  333512 start.go:83] releasing machines lock for "ingress-addon-legacy-568478", held for 25.005674237s
	I0229 01:25:48.575436  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:25:48.575725  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetIP
	I0229 01:25:48.578062  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.578402  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:48.578427  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.578532  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:25:48.579047  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:25:48.579233  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:25:48.579328  333512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:25:48.579390  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:48.579481  333512 ssh_runner.go:195] Run: cat /version.json
	I0229 01:25:48.579500  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:25:48.581892  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.582155  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.582270  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:48.582298  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.582442  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:25:48.582585  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:48.582594  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:48.582611  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:48.582751  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:25:48.582769  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:25:48.582956  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:25:48.582951  333512 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa Username:docker}
	I0229 01:25:48.583107  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:25:48.583275  333512 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa Username:docker}
	I0229 01:25:48.685487  333512 ssh_runner.go:195] Run: systemctl --version
	I0229 01:25:48.691812  333512 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 01:25:48.856363  333512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:25:48.862875  333512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:25:48.862942  333512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:25:48.880609  333512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
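(Editor's note: the bridge/podman CNI configs above are disabled by renaming them to *.mk_disabled rather than deleting them, so they can be restored later. The paths and suffix below come from the log; the Go translation of the find/mv pipeline is otherwise hypothetical:)

    package sketch

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs in /etc/cni/net.d to
    // <name>.mk_disabled, matching the find -exec mv pipeline in the log.
    func disableBridgeCNI() ([]string, error) {
        entries, err := os.ReadDir("/etc/cni/net.d")
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            src := filepath.Join("/etc/cni/net.d", name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                return disabled, err
            }
            disabled = append(disabled, src)
        }
        return disabled, nil
    }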
	I0229 01:25:48.880639  333512 start.go:475] detecting cgroup driver to use...
	I0229 01:25:48.880714  333512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:25:48.899707  333512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:25:48.915970  333512 docker.go:217] disabling cri-docker service (if available) ...
	I0229 01:25:48.916047  333512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 01:25:48.931414  333512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 01:25:48.946051  333512 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 01:25:49.071666  333512 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 01:25:49.241326  333512 docker.go:233] disabling docker service ...
	I0229 01:25:49.241389  333512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 01:25:49.257520  333512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 01:25:49.271927  333512 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 01:25:49.409111  333512 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 01:25:49.535112  333512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 01:25:49.550509  333512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:25:49.570287  333512 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0229 01:25:49.570352  333512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:25:49.581408  333512 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 01:25:49.581460  333512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:25:49.592455  333512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:25:49.603602  333512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
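(Editor's note: taken together, the three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines, reconstructed from the commands rather than copied from the guest:)

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

The cgroupfs manager matches the kubelet's cgroupDriver in the config further down, and CRI-O expects conmon_cgroup to be "pod" (or empty) whenever cgroup_manager is cgroupfs, which is why the old conmon_cgroup line is deleted before the new one is appended.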
	I0229 01:25:49.614945  333512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:25:49.626251  333512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:25:49.636262  333512 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 01:25:49.636322  333512 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 01:25:49.651174  333512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
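(Editor's note: the status-255 sysctl failure above is the expected path on a fresh guest: the net.bridge.* sysctls only exist once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IP forwarding. A tiny sketch of that fallback, where run is a hypothetical command helper:)

    package sketch

    // ensureBridgeNetfilter mirrors the log's fallback: if the bridge sysctl
    // key is missing, load br_netfilter so iptables can see bridged traffic.
    func ensureBridgeNetfilter(run func(name string, args ...string) error) error {
        if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err == nil {
            return nil // key present, module already loaded
        }
        return run("modprobe", "br_netfilter")
    }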
	I0229 01:25:49.661336  333512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:25:49.783416  333512 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 01:25:49.926804  333512 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 01:25:49.926879  333512 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 01:25:49.932568  333512 start.go:543] Will wait 60s for crictl version
	I0229 01:25:49.932632  333512 ssh_runner.go:195] Run: which crictl
	I0229 01:25:49.937197  333512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:25:49.976524  333512 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 01:25:49.976607  333512 ssh_runner.go:195] Run: crio --version
	I0229 01:25:50.009222  333512 ssh_runner.go:195] Run: crio --version
	I0229 01:25:50.041740  333512 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.29.1 ...
	I0229 01:25:50.042885  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetIP
	I0229 01:25:50.045524  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:50.045804  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:25:50.045824  333512 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:25:50.046045  333512 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 01:25:50.050759  333512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
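(Editor's note: the one-liner above is an idempotent hosts update: strip any stale host.minikube.internal line, append the current gateway IP, and install the temp file over /etc/hosts in one step. The same idea in Go; the file path is real, the helper is hypothetical, and it renames within /etc so the swap stays on one filesystem, where the logged shell copies from /tmp with sudo cp instead:)

    package sketch

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry rewrites /etc/hosts so exactly one line maps name to
    // ip, going through a temp file like the shell pipeline in the log.
    func upsertHostsEntry(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line) // drop only the stale entry
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        tmp := "/etc/hosts.new"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, "/etc/hosts")
    }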
	I0229 01:25:50.064455  333512 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0229 01:25:50.064523  333512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:25:50.097231  333512 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0229 01:25:50.097309  333512 ssh_runner.go:195] Run: which lz4
	I0229 01:25:50.101852  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 01:25:50.101967  333512 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 01:25:50.106653  333512 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 01:25:50.106682  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0229 01:25:51.969051  333512 crio.go:444] Took 1.867105 seconds to copy over tarball
	I0229 01:25:51.969122  333512 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 01:25:54.998987  333512 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.02976738s)
	I0229 01:25:54.999032  333512 crio.go:451] Took 3.029953 seconds to extract the tarball
	I0229 01:25:54.999046  333512 ssh_runner.go:146] rm: /preloaded.tar.lz4
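(Editor's note: one detail worth calling out in the extraction above is --xattrs-include security.capability, which preserves file capabilities through the unpack; without it, binaries in the preload that rely on capabilities would lose them. An equivalent invocation from Go, using the same flags as the logged tar command and nothing else assumed:)

    package sketch

    import (
        "os"
        "os/exec"
    )

    // extractPreload unpacks the lz4-compressed preload tarball into /var,
    // keeping security.capability xattrs, exactly as the logged tar command.
    func extractPreload() error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }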
	I0229 01:25:55.046554  333512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:25:55.097801  333512 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0229 01:25:55.097833  333512 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 01:25:55.097921  333512 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:25:55.097948  333512 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:25:55.097961  333512 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 01:25:55.097993  333512 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 01:25:55.098039  333512 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 01:25:55.098074  333512 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:25:55.098049  333512 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:25:55.098072  333512 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:25:55.099370  333512 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:25:55.099395  333512 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:25:55.099430  333512 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0229 01:25:55.099371  333512 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:25:55.099374  333512 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 01:25:55.099376  333512 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0229 01:25:55.099380  333512 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:25:55.099376  333512 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:25:55.233921  333512 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:25:55.262116  333512 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0229 01:25:55.285432  333512 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0229 01:25:55.285471  333512 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:25:55.285516  333512 ssh_runner.go:195] Run: which crictl
	I0229 01:25:55.295868  333512 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:25:55.330093  333512 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 01:25:55.330235  333512 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0229 01:25:55.330266  333512 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0229 01:25:55.330295  333512 ssh_runner.go:195] Run: which crictl
	I0229 01:25:55.364386  333512 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0229 01:25:55.364436  333512 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:25:55.364480  333512 ssh_runner.go:195] Run: which crictl
	I0229 01:25:55.384362  333512 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0229 01:25:55.384391  333512 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 01:25:55.384414  333512 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0229 01:25:55.389260  333512 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:25:55.400239  333512 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0229 01:25:55.403249  333512 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0229 01:25:55.458320  333512 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:25:55.476266  333512 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0229 01:25:55.476351  333512 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0229 01:25:55.517312  333512 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0229 01:25:55.517369  333512 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:25:55.517421  333512 ssh_runner.go:195] Run: which crictl
	I0229 01:25:55.545961  333512 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0229 01:25:55.546003  333512 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 01:25:55.546048  333512 ssh_runner.go:195] Run: which crictl
	I0229 01:25:55.549193  333512 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0229 01:25:55.549244  333512 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 01:25:55.549302  333512 ssh_runner.go:195] Run: which crictl
	I0229 01:25:55.566321  333512 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0229 01:25:55.566368  333512 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:25:55.566407  333512 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 01:25:55.566461  333512 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0229 01:25:55.566412  333512 ssh_runner.go:195] Run: which crictl
	I0229 01:25:55.566506  333512 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0229 01:25:55.641444  333512 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0229 01:25:55.644222  333512 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 01:25:55.644247  333512 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0229 01:25:55.644467  333512 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0229 01:25:55.682694  333512 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0229 01:25:56.025456  333512 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:25:56.177863  333512 cache_images.go:92] LoadImages completed in 1.080006308s
	W0229 01:25:56.178079  333512 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
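(Editor's note: the sequence from "LoadImages start" to the warning above follows one pattern per image: ask the local Docker daemon (which fails, since the CI host has none configured), inspect the image on the guest with podman, and if the stored ID differs from the expected hash or is absent, remove the stale tag with crictl and try to load the image from the local cache directory. The cache files are missing here, so the X warning fires; it is non-fatal, as the run continues and the images are pulled later instead. The per-image decision, sketched with inspect as a hypothetical stand-in for the podman call:)

    package sketch

    // needsTransfer reports whether an image must be (re)loaded on the guest:
    // true when the runtime lacks it or stores it under a different ID.
    func needsTransfer(image, wantID string, inspect func(image string) (string, error)) bool {
        gotID, err := inspect(image)
        return err != nil || gotID != wantID
    }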
	I0229 01:25:56.178208  333512 ssh_runner.go:195] Run: crio config
	I0229 01:25:56.233421  333512 cni.go:84] Creating CNI manager for ""
	I0229 01:25:56.233445  333512 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:25:56.233464  333512 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:25:56.233483  333512 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-568478 NodeName:ingress-addon-legacy-568478 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 01:25:56.233647  333512 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-568478"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 01:25:56.233724  333512 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-568478 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-568478 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 01:25:56.233776  333512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 01:25:56.244965  333512 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:25:56.245056  333512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:25:56.255756  333512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0229 01:25:56.274551  333512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 01:25:56.292951  333512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0229 01:25:56.312050  333512 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0229 01:25:56.316448  333512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:25:56.330048  333512 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478 for IP: 192.168.39.209
	I0229 01:25:56.330097  333512 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:25:56.330305  333512 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 01:25:56.330356  333512 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 01:25:56.330409  333512 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/client.key
	I0229 01:25:56.330423  333512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/client.crt with IP's: []
	I0229 01:25:56.579939  333512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/client.crt ...
	I0229 01:25:56.579974  333512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/client.crt: {Name:mke193e6f608e65a9afde4424f9aa4f0f779b228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:25:56.580142  333512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/client.key ...
	I0229 01:25:56.580157  333512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/client.key: {Name:mk96dd6d86284faef74e6a5f9327d67145e00610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:25:56.580237  333512 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.key.c475731a
	I0229 01:25:56.580253  333512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.crt.c475731a with IP's: [192.168.39.209 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 01:25:56.729566  333512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.crt.c475731a ...
	I0229 01:25:56.729599  333512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.crt.c475731a: {Name:mka663f5fc2b58ad977da432f9a2199286b16565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:25:56.729754  333512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.key.c475731a ...
	I0229 01:25:56.729769  333512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.key.c475731a: {Name:mkd2b4da1a155029a5ea63bbf6168f3df2cbe5e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:25:56.729835  333512 certs.go:337] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.crt.c475731a -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.crt
	I0229 01:25:56.729939  333512 certs.go:341] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.key.c475731a -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.key
	I0229 01:25:56.729995  333512 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/proxy-client.key
	I0229 01:25:56.730010  333512 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/proxy-client.crt with IP's: []
	I0229 01:25:56.803197  333512 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/proxy-client.crt ...
	I0229 01:25:56.803229  333512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/proxy-client.crt: {Name:mk6bef7ebf38e83d994a198a0a3aa989e7648b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:25:56.803402  333512 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/proxy-client.key ...
	I0229 01:25:56.803417  333512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/proxy-client.key: {Name:mk093297e060bc26e2e5daab922c0fd66e2d339d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:25:56.803488  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 01:25:56.803505  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 01:25:56.803518  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 01:25:56.803530  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 01:25:56.803543  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 01:25:56.803559  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 01:25:56.803571  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 01:25:56.803583  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 01:25:56.803684  333512 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 01:25:56.803731  333512 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 01:25:56.803741  333512 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 01:25:56.803766  333512 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 01:25:56.803788  333512 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:25:56.803807  333512 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 01:25:56.803845  333512 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 01:25:56.803889  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:25:56.803906  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem -> /usr/share/ca-certificates/323885.pem
	I0229 01:25:56.803919  333512 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> /usr/share/ca-certificates/3238852.pem
	I0229 01:25:56.804628  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:25:56.833179  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 01:25:56.859429  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:25:56.885948  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/ingress-addon-legacy-568478/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 01:25:56.912993  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:25:56.940553  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:25:56.967044  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:25:56.993933  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 01:25:57.020931  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:25:57.047056  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 01:25:57.073361  333512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 01:25:57.099795  333512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:25:57.118587  333512 ssh_runner.go:195] Run: openssl version
	I0229 01:25:57.124894  333512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 01:25:57.136650  333512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 01:25:57.142017  333512 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 01:25:57.142089  333512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 01:25:57.148475  333512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 01:25:57.159888  333512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:25:57.171456  333512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:25:57.176900  333512 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:25:57.176953  333512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:25:57.183510  333512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 01:25:57.195703  333512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 01:25:57.207827  333512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 01:25:57.212760  333512 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 01:25:57.212811  333512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 01:25:57.219129  333512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
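
The hash-named symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's certificate-directory convention: the link name is the subject hash printed by `openssl x509 -hash -noout`, plus a .0 suffix for the first certificate with that hash. A sketch of how any one of them is derived by hand:

	# Reproduce the b5213941.0 link for the minikube CA
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
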
	I0229 01:25:57.230662  333512 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:25:57.235100  333512 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:25:57.235147  333512 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-568478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-568478 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:25:57.235230  333512 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 01:25:57.235264  333512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 01:25:57.283883  333512 cri.go:89] found id: ""
	I0229 01:25:57.283993  333512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:25:57.294856  333512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:25:57.304861  333512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:25:57.314935  333512 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:25:57.314981  333512 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:25:57.375571  333512 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 01:25:57.376499  333512 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:25:57.508612  333512 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:25:57.508717  333512 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:25:57.508854  333512 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:25:57.735028  333512 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:25:57.736002  333512 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:25:57.736085  333512 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 01:25:57.895287  333512 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:25:57.897941  333512 out.go:204]   - Generating certificates and keys ...
	I0229 01:25:57.898049  333512 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:25:57.898213  333512 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:25:58.105640  333512 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 01:25:58.214676  333512 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 01:25:58.365304  333512 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 01:25:58.494644  333512 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 01:25:58.692814  333512 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 01:25:58.693067  333512 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-568478 localhost] and IPs [192.168.39.209 127.0.0.1 ::1]
	I0229 01:25:58.805927  333512 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 01:25:58.806107  333512 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-568478 localhost] and IPs [192.168.39.209 127.0.0.1 ::1]
	I0229 01:25:58.901067  333512 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 01:25:59.058458  333512 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 01:25:59.154583  333512 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 01:25:59.154863  333512 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:25:59.338688  333512 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:25:59.435824  333512 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:25:59.643971  333512 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:25:59.741189  333512 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:25:59.743463  333512 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:25:59.745214  333512 out.go:204]   - Booting up control plane ...
	I0229 01:25:59.745333  333512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:25:59.750757  333512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:25:59.751929  333512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:25:59.752891  333512 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:25:59.755792  333512 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:26:39.753633  333512 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:26:39.754156  333512 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:26:39.754361  333512 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:26:44.755047  333512 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:26:44.755267  333512 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:26:54.755586  333512 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:26:54.755824  333512 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:27:14.756962  333512 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:27:14.757168  333512 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:27:54.757110  333512 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:27:54.757359  333512 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:27:54.757371  333512 kubeadm.go:322] 
	I0229 01:27:54.757445  333512 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 01:27:54.757539  333512 kubeadm.go:322] 		timed out waiting for the condition
	I0229 01:27:54.757560  333512 kubeadm.go:322] 
	I0229 01:27:54.757609  333512 kubeadm.go:322] 	This error is likely caused by:
	I0229 01:27:54.757659  333512 kubeadm.go:322] 		- The kubelet is not running
	I0229 01:27:54.757785  333512 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:27:54.757796  333512 kubeadm.go:322] 
	I0229 01:27:54.757933  333512 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:27:54.757982  333512 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 01:27:54.758030  333512 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 01:27:54.758039  333512 kubeadm.go:322] 
	I0229 01:27:54.758191  333512 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:27:54.758330  333512 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 01:27:54.758347  333512 kubeadm.go:322] 
	I0229 01:27:54.758459  333512 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0229 01:27:54.758533  333512 kubeadm.go:322] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0229 01:27:54.758649  333512 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 01:27:54.758768  333512 kubeadm.go:322] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0229 01:27:54.758780  333512 kubeadm.go:322] 
	I0229 01:27:54.758956  333512 kubeadm.go:322] W0229 01:25:57.368820     917 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 01:27:54.759092  333512 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:27:54.759234  333512 kubeadm.go:322] W0229 01:25:59.745617     917 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:27:54.759375  333512 kubeadm.go:322] W0229 01:25:59.746858     917 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:27:54.759491  333512 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:27:54.759592  333512 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 01:27:54.759837  333512 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-568478 localhost] and IPs [192.168.39.209 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-568478 localhost] and IPs [192.168.39.209 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 01:25:57.368820     917 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:25:59.745617     917 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:25:59.746858     917 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-568478 localhost] and IPs [192.168.39.209 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-568478 localhost] and IPs [192.168.39.209 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 01:25:57.368820     917 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:25:59.745617     917 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:25:59.746858     917 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
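
kubeadm's own hints above are the right starting point for this failure class (the kubelet never answering on :10248). A minimal triage pass over the same VM, assuming it is still reachable and using only the commands the log itself suggests:

	MK="minikube -p ingress-addon-legacy-568478"
	$MK ssh "systemctl status kubelet --no-pager"                  # is the unit even running?
	$MK ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50" # why it exited, if it did
	$MK ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
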
	
	I0229 01:27:54.759908  333512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 01:27:55.228441  333512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:27:55.244508  333512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:27:55.255772  333512 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:27:55.255821  333512 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:27:55.306117  333512 kubeadm.go:322] W0229 01:27:55.303578    2351 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 01:27:55.435896  333512 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:27:56.327043  333512 kubeadm.go:322] W0229 01:27:56.324710    2351 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:27:56.328003  333512 kubeadm.go:322] W0229 01:27:56.325768    2351 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:29:51.337382  333512 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:29:51.337527  333512 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:29:51.339027  333512 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 01:29:51.339079  333512 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:29:51.339174  333512 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:29:51.339268  333512 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:29:51.339375  333512 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:29:51.339516  333512 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:29:51.339647  333512 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:29:51.339713  333512 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 01:29:51.339795  333512 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:29:51.341921  333512 out.go:204]   - Generating certificates and keys ...
	I0229 01:29:51.342012  333512 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:29:51.342062  333512 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:29:51.342123  333512 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:29:51.342174  333512 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:29:51.342271  333512 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:29:51.342336  333512 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:29:51.342403  333512 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:29:51.342500  333512 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:29:51.342617  333512 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:29:51.342708  333512 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:29:51.342761  333512 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:29:51.342817  333512 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:29:51.342876  333512 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:29:51.342957  333512 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:29:51.343045  333512 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:29:51.343127  333512 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:29:51.343226  333512 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:29:51.345105  333512 out.go:204]   - Booting up control plane ...
	I0229 01:29:51.345211  333512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:29:51.345286  333512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:29:51.345341  333512 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:29:51.345409  333512 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:29:51.345562  333512 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:29:51.345612  333512 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:29:51.345678  333512 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:29:51.345894  333512 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:29:51.345996  333512 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:29:51.346178  333512 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:29:51.346264  333512 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:29:51.346412  333512 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:29:51.346470  333512 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:29:51.346632  333512 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:29:51.346697  333512 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:29:51.346849  333512 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:29:51.346856  333512 kubeadm.go:322] 
	I0229 01:29:51.346892  333512 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 01:29:51.346956  333512 kubeadm.go:322] 		timed out waiting for the condition
	I0229 01:29:51.346970  333512 kubeadm.go:322] 
	I0229 01:29:51.347028  333512 kubeadm.go:322] 	This error is likely caused by:
	I0229 01:29:51.347074  333512 kubeadm.go:322] 		- The kubelet is not running
	I0229 01:29:51.347215  333512 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:29:51.347227  333512 kubeadm.go:322] 
	I0229 01:29:51.347344  333512 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:29:51.347378  333512 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 01:29:51.347407  333512 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 01:29:51.347413  333512 kubeadm.go:322] 
	I0229 01:29:51.347503  333512 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:29:51.347571  333512 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 01:29:51.347584  333512 kubeadm.go:322] 
	I0229 01:29:51.347686  333512 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0229 01:29:51.347765  333512 kubeadm.go:322] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0229 01:29:51.347833  333512 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 01:29:51.347898  333512 kubeadm.go:322] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0229 01:29:51.347925  333512 kubeadm.go:322] 
	I0229 01:29:51.348004  333512 kubeadm.go:406] StartCluster complete in 3m54.112862045s
	I0229 01:29:51.348066  333512 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 01:29:51.348144  333512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 01:29:51.408370  333512 cri.go:89] found id: ""
	I0229 01:29:51.408405  333512 logs.go:276] 0 containers: []
	W0229 01:29:51.408414  333512 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:29:51.408420  333512 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 01:29:51.408481  333512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 01:29:51.462247  333512 cri.go:89] found id: ""
	I0229 01:29:51.462276  333512 logs.go:276] 0 containers: []
	W0229 01:29:51.462285  333512 logs.go:278] No container was found matching "etcd"
	I0229 01:29:51.462292  333512 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 01:29:51.462343  333512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 01:29:51.524271  333512 cri.go:89] found id: ""
	I0229 01:29:51.524297  333512 logs.go:276] 0 containers: []
	W0229 01:29:51.524309  333512 logs.go:278] No container was found matching "coredns"
	I0229 01:29:51.524317  333512 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 01:29:51.524384  333512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 01:29:51.565304  333512 cri.go:89] found id: ""
	I0229 01:29:51.565343  333512 logs.go:276] 0 containers: []
	W0229 01:29:51.565355  333512 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:29:51.565362  333512 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 01:29:51.565446  333512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 01:29:51.603219  333512 cri.go:89] found id: ""
	I0229 01:29:51.603310  333512 logs.go:276] 0 containers: []
	W0229 01:29:51.603328  333512 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:29:51.603339  333512 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 01:29:51.603427  333512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 01:29:51.641129  333512 cri.go:89] found id: ""
	I0229 01:29:51.641154  333512 logs.go:276] 0 containers: []
	W0229 01:29:51.641162  333512 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:29:51.641169  333512 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 01:29:51.641231  333512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 01:29:51.677339  333512 cri.go:89] found id: ""
	I0229 01:29:51.677369  333512 logs.go:276] 0 containers: []
	W0229 01:29:51.677382  333512 logs.go:278] No container was found matching "kindnet"
	I0229 01:29:51.677397  333512 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:29:51.677414  333512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:29:51.748795  333512 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:29:51.748826  333512 logs.go:123] Gathering logs for CRI-O ...
	I0229 01:29:51.748843  333512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 01:29:51.836096  333512 logs.go:123] Gathering logs for container status ...
	I0229 01:29:51.836136  333512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:29:51.878479  333512 logs.go:123] Gathering logs for kubelet ...
	I0229 01:29:51.878509  333512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:29:51.938793  333512 logs.go:123] Gathering logs for dmesg ...
	I0229 01:29:51.938836  333512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
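
The kubelet, dmesg, CRI-O, and container-status gathering above is the same diagnostics bundle minikube exposes to users; assuming the profile still exists, it can be pulled after the fact with:

	# Collect the diagnostics bundle for this profile (sketch; --file is optional)
	minikube -p ingress-addon-legacy-568478 logs --file=/tmp/ingress-addon-legacy-568478.log
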
	W0229 01:29:51.954264  333512 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 01:27:55.303578    2351 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:27:56.324710    2351 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:27:56.325768    2351 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 01:29:51.954320  333512 out.go:239] * 
	W0229 01:29:51.954397  333512 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 01:27:55.303578    2351 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:27:56.324710    2351 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:27:56.325768    2351 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:29:51.954429  333512 out.go:239] * 
	W0229 01:29:51.955214  333512 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:29:51.958559  333512 out.go:177] 
	W0229 01:29:51.960074  333512 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
	W0229 01:27:55.303578    2351 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 01:27:56.324710    2351 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 01:27:56.325768    2351 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:29:51.960129  333512 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 01:29:51.960147  333512 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 01:29:51.961872  333512 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-568478 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (287.16s)
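
The stack above boils down to one symptom: the kubelet never answers its health probe on localhost:10248, so kubeadm's wait-control-plane phase times out after 4m0s and minikube exits with K8S_KUBELET_NOT_RUNNING. Minikube's own log suggests checking 'journalctl -xeu kubelet' and retrying with the kubelet cgroup driver pinned to systemd (issue 4172 linked above). A minimal manual retry along those lines might look like the sketch below; the profile name and start flags are copied from this run, and the cgroup-driver override is the log's hint, not a verified fix:

	# inspect the kubelet inside the VM (the commands the log itself recommends)
	minikube -p ingress-addon-legacy-568478 ssh -- sudo systemctl status kubelet
	minikube -p ingress-addon-legacy-568478 ssh -- sudo journalctl -xeu kubelet
	# recreate the cluster with the suggested kubelet cgroup driver
	minikube delete -p ingress-addon-legacy-568478
	minikube start -p ingress-addon-legacy-568478 --kubernetes-version=v1.18.20 --memory=4096 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd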

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (80.74s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-568478 addons enable ingress --alsologtostderr -v=5
E0229 01:30:05.516599  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:30:30.963568  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-568478 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m20.472226219s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 01:29:52.077826  334429 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:29:52.078291  334429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:29:52.078309  334429 out.go:304] Setting ErrFile to fd 2...
	I0229 01:29:52.078316  334429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:29:52.078782  334429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:29:52.079661  334429 mustload.go:65] Loading cluster: ingress-addon-legacy-568478
	I0229 01:29:52.080031  334429 config.go:182] Loaded profile config "ingress-addon-legacy-568478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 01:29:52.080056  334429 addons.go:597] checking whether the cluster is paused
	I0229 01:29:52.080132  334429 config.go:182] Loaded profile config "ingress-addon-legacy-568478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 01:29:52.080145  334429 host.go:66] Checking if "ingress-addon-legacy-568478" exists ...
	I0229 01:29:52.080478  334429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:29:52.080520  334429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:29:52.095469  334429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39925
	I0229 01:29:52.095960  334429 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:29:52.096572  334429 main.go:141] libmachine: Using API Version  1
	I0229 01:29:52.096595  334429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:29:52.096963  334429 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:29:52.097176  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetState
	I0229 01:29:52.098905  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:29:52.099181  334429 ssh_runner.go:195] Run: systemctl --version
	I0229 01:29:52.099208  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:29:52.101719  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:29:52.102154  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:29:52.102174  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:29:52.102358  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:29:52.102532  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:29:52.102724  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:29:52.102900  334429 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa Username:docker}
	I0229 01:29:52.185001  334429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 01:29:52.185088  334429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 01:29:52.223294  334429 cri.go:89] found id: ""
	I0229 01:29:52.223344  334429 main.go:141] libmachine: Making call to close driver server
	I0229 01:29:52.223357  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .Close
	I0229 01:29:52.223730  334429 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:29:52.223760  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Closing plugin on server side
	I0229 01:29:52.223768  334429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:29:52.226191  334429 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 01:29:52.227793  334429 config.go:182] Loaded profile config "ingress-addon-legacy-568478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 01:29:52.227809  334429 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-568478"
	I0229 01:29:52.227824  334429 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-568478"
	I0229 01:29:52.227860  334429 host.go:66] Checking if "ingress-addon-legacy-568478" exists ...
	I0229 01:29:52.228111  334429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:29:52.228148  334429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:29:52.242842  334429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44987
	I0229 01:29:52.243298  334429 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:29:52.243761  334429 main.go:141] libmachine: Using API Version  1
	I0229 01:29:52.243780  334429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:29:52.244130  334429 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:29:52.244760  334429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:29:52.244845  334429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:29:52.259462  334429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0229 01:29:52.259902  334429 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:29:52.260410  334429 main.go:141] libmachine: Using API Version  1
	I0229 01:29:52.260436  334429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:29:52.260758  334429 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:29:52.260962  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetState
	I0229 01:29:52.262433  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:29:52.264336  334429 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0229 01:29:52.265874  334429 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 01:29:52.267205  334429 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 01:29:52.268733  334429 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 01:29:52.268750  334429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0229 01:29:52.268767  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:29:52.271687  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:29:52.272162  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:29:52.272198  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:29:52.272305  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:29:52.272479  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:29:52.272656  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:29:52.272831  334429 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa Username:docker}
	I0229 01:29:52.365661  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:29:52.429417  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:52.429454  334429 retry.go:31] will retry after 256.618813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:52.687057  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:29:52.764866  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:52.764922  334429 retry.go:31] will retry after 356.584672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:53.122637  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:29:53.190878  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:53.190914  334429 retry.go:31] will retry after 299.735981ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:53.491546  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:29:53.558720  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:53.558761  334429 retry.go:31] will retry after 835.202153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:54.394867  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:29:54.458456  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:54.458502  334429 retry.go:31] will retry after 1.642489884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:56.102371  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:29:56.189217  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:56.189259  334429 retry.go:31] will retry after 2.574625685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:58.765201  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:29:58.839973  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:29:58.840009  334429 retry.go:31] will retry after 2.313673867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:01.155551  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:30:01.220737  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:01.220791  334429 retry.go:31] will retry after 5.082165143s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:06.306669  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:30:06.371786  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:06.371826  334429 retry.go:31] will retry after 5.966536371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:12.339379  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:30:12.407875  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:12.407917  334429 retry.go:31] will retry after 13.73231591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:26.143961  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:30:26.248145  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:26.248211  334429 retry.go:31] will retry after 17.514328474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:43.763423  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:30:43.826141  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:30:43.826189  334429 retry.go:31] will retry after 28.583301201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:12.410521  334429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:31:12.476009  334429 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:12.476078  334429 main.go:141] libmachine: Making call to close driver server
	I0229 01:31:12.476093  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .Close
	I0229 01:31:12.476425  334429 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:31:12.476453  334429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:31:12.476464  334429 main.go:141] libmachine: Making call to close driver server
	I0229 01:31:12.476473  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .Close
	I0229 01:31:12.476735  334429 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:31:12.476751  334429 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Closing plugin on server side
	I0229 01:31:12.476757  334429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:31:12.476779  334429 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-568478"
	I0229 01:31:12.478938  334429 out.go:177] * Verifying ingress addon...
	I0229 01:31:12.481402  334429 out.go:177] 
	W0229 01:31:12.482921  334429 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-568478" does not exist: client config: context "ingress-addon-legacy-568478" does not exist]
	W0229 01:31:12.482940  334429 out.go:239] * 
	W0229 01:31:12.485502  334429 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:31:12.487017  334429 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-568478 -n ingress-addon-legacy-568478
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-568478 -n ingress-addon-legacy-568478: exit status 6 (267.175533ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:31:12.742140  334639 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-568478" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-568478" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (80.74s)
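
Every apply retry above fails the same way: kubectl against localhost:8443 is refused because the control plane never came up in the previous test, and the post-mortem shows the profile's context is missing from the kubeconfig entirely. Both symptoms could be confirmed by hand with commands already present in this report ('minikube update-context' is the hint printed by the status check above); this is a diagnostic sketch under that assumption, not part of the test itself:

	# confirm the context is gone, per the status error above
	kubectl config get-contexts
	# re-point kubectl at the VM's current endpoint, as the status warning suggests
	minikube -p ingress-addon-legacy-568478 update-context
	# then re-check whether the apiserver is reachable at all
	minikube -p ingress-addon-legacy-568478 status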

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (99.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-568478 addons enable ingress-dns --alsologtostderr -v=5
E0229 01:31:52.884036  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-568478 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m39.19407056s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 01:31:12.814769  334670 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:31:12.814955  334670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:31:12.814970  334670 out.go:304] Setting ErrFile to fd 2...
	I0229 01:31:12.814976  334670 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:31:12.815191  334670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:31:12.815494  334670 mustload.go:65] Loading cluster: ingress-addon-legacy-568478
	I0229 01:31:12.815859  334670 config.go:182] Loaded profile config "ingress-addon-legacy-568478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 01:31:12.815886  334670 addons.go:597] checking whether the cluster is paused
	I0229 01:31:12.815994  334670 config.go:182] Loaded profile config "ingress-addon-legacy-568478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 01:31:12.816012  334670 host.go:66] Checking if "ingress-addon-legacy-568478" exists ...
	I0229 01:31:12.816414  334670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:31:12.816481  334670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:31:12.831320  334670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44199
	I0229 01:31:12.831802  334670 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:31:12.832342  334670 main.go:141] libmachine: Using API Version  1
	I0229 01:31:12.832378  334670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:31:12.832769  334670 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:31:12.833003  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetState
	I0229 01:31:12.834525  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:31:12.834743  334670 ssh_runner.go:195] Run: systemctl --version
	I0229 01:31:12.834767  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:31:12.836672  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:31:12.836988  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:31:12.837013  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:31:12.837147  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:31:12.837308  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:31:12.837471  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:31:12.837606  334670 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa Username:docker}
	I0229 01:31:12.916832  334670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 01:31:12.916925  334670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 01:31:12.963003  334670 cri.go:89] found id: ""
	I0229 01:31:12.963063  334670 main.go:141] libmachine: Making call to close driver server
	I0229 01:31:12.963079  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .Close
	I0229 01:31:12.963418  334670 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:31:12.963443  334670 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:31:12.965837  334670 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 01:31:12.967507  334670 config.go:182] Loaded profile config "ingress-addon-legacy-568478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0229 01:31:12.967525  334670 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-568478"
	I0229 01:31:12.967534  334670 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-568478"
	I0229 01:31:12.967575  334670 host.go:66] Checking if "ingress-addon-legacy-568478" exists ...
	I0229 01:31:12.967887  334670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:31:12.967939  334670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:31:12.982652  334670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0229 01:31:12.983109  334670 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:31:12.983608  334670 main.go:141] libmachine: Using API Version  1
	I0229 01:31:12.983624  334670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:31:12.984051  334670 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:31:12.984553  334670 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:31:12.984591  334670 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:31:12.999386  334670 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I0229 01:31:12.999789  334670 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:31:13.000270  334670 main.go:141] libmachine: Using API Version  1
	I0229 01:31:13.000293  334670 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:31:13.000653  334670 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:31:13.000838  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetState
	I0229 01:31:13.002265  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .DriverName
	I0229 01:31:13.004112  334670 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0229 01:31:13.005526  334670 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 01:31:13.005544  334670 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0229 01:31:13.005563  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHHostname
	I0229 01:31:13.007934  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:31:13.008333  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:89:37", ip: ""} in network mk-ingress-addon-legacy-568478: {Iface:virbr1 ExpiryTime:2024-02-29 02:25:39 +0000 UTC Type:0 Mac:52:54:00:38:89:37 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ingress-addon-legacy-568478 Clientid:01:52:54:00:38:89:37}
	I0229 01:31:13.008367  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | domain ingress-addon-legacy-568478 has defined IP address 192.168.39.209 and MAC address 52:54:00:38:89:37 in network mk-ingress-addon-legacy-568478
	I0229 01:31:13.008482  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHPort
	I0229 01:31:13.008652  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHKeyPath
	I0229 01:31:13.008797  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .GetSSHUsername
	I0229 01:31:13.008939  334670 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/ingress-addon-legacy-568478/id_rsa Username:docker}
	I0229 01:31:13.101783  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:13.164132  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:13.164172  334670 retry.go:31] will retry after 141.817766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:13.306619  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:13.376664  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:13.376699  334670 retry.go:31] will retry after 545.045833ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:13.922563  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:14.000580  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:14.000621  334670 retry.go:31] will retry after 539.633185ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:14.540433  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:14.604725  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:14.604766  334670 retry.go:31] will retry after 935.45182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:15.540931  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:15.608208  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:15.608252  334670 retry.go:31] will retry after 1.383217651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:16.991738  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:17.056023  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:17.056056  334670 retry.go:31] will retry after 2.171053832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:19.227590  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:19.294145  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:19.294183  334670 retry.go:31] will retry after 2.400551898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:21.696803  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:21.759837  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:21.759883  334670 retry.go:31] will retry after 4.110038807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:25.872580  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:25.937591  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:25.937644  334670 retry.go:31] will retry after 4.191164416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:30.130834  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:30.199128  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:30.199164  334670 retry.go:31] will retry after 10.359087269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:40.560239  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:40.636313  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:40.636356  334670 retry.go:31] will retry after 18.169841475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:58.806873  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:31:58.875853  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:31:58.875902  334670 retry.go:31] will retry after 24.732864496s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:32:23.609239  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:32:23.705285  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:32:23.705334  334670 retry.go:31] will retry after 28.16559535s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:32:51.874721  334670 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:32:51.939923  334670 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:32:51.939996  334670 main.go:141] libmachine: Making call to close driver server
	I0229 01:32:51.940009  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .Close
	I0229 01:32:51.940297  334670 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:32:51.940329  334670 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:32:51.940397  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) DBG | Closing plugin on server side
	I0229 01:32:51.940464  334670 main.go:141] libmachine: Making call to close driver server
	I0229 01:32:51.940485  334670 main.go:141] libmachine: (ingress-addon-legacy-568478) Calling .Close
	I0229 01:32:51.940749  334670 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:32:51.940779  334670 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:32:51.943604  334670 out.go:177] 
	W0229 01:32:51.945348  334670 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0229 01:32:51.945374  334670 out.go:239] * 
	* 
	W0229 01:32:51.948232  334670 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:32:51.949933  334670 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-568478 -n ingress-addon-legacy-568478
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-568478 -n ingress-addon-legacy-568478: exit status 6 (240.129973ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:32:52.179271  334931 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-568478" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-568478" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (99.44s)
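
Triage note: every apply retry above dies with "The connection to the server localhost:8443 was refused", and the earlier paused-state probe returned no kube-system containers ("found id: \"\""), so the apiserver was already down before the addon manifest was applied. A diagnostic sketch, assuming SSH into the VM still works; these are standard minikube/crictl/journalctl invocations, not commands from this run:

	# List kube-system containers, including exited ones, to see if the apiserver ever started
	minikube -p ingress-addon-legacy-568478 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	# Check kubelet for the reason the control plane did not come up
	minikube -p ingress-addon-legacy-568478 ssh -- sudo journalctl -u kubelet --no-pager -n 50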

                                                
                                    

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.23s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-568478 -n ingress-addon-legacy-568478
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-568478 -n ingress-addon-legacy-568478: exit status 6 (232.414794ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:32:52.412551  334961 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-568478" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-568478" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.23s)
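
Triage note: "failed to get Kubernetes client: <nil>" is the same kubeconfig problem surfacing in the test harness: with no entry for the profile, no client can be constructed. A quick check, using the kubeconfig path shown in the stderr above:

	# Confirm whether the profile is present in the kubeconfig the tests use
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/18063-316644/kubeconfig
	grep -c ingress-addon-legacy-568478 /home/jenkins/minikube-integration/18063-316644/kubeconfig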

                                                
                                    
TestMountStart/serial/RestartStopped (25.47s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-102671
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p mount-start-2-102671: exit status 80 (25.230099517s)

                                                
                                                
-- stdout --
	* [mount-start-2-102671] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-102671
	* Restarting existing kvm2 VM for "mount-start-2-102671" ...
	* Updating the running kvm2 "mount-start-2-102671" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	* Failed to start kvm2 VM. Running "minikube delete -p mount-start-2-102671" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:168: restart failed: "out/minikube-linux-amd64 start -p mount-start-2-102671" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p mount-start-2-102671 -n mount-start-2-102671
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p mount-start-2-102671 -n mount-start-2-102671: exit status 6 (243.900061ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:37:38.975943  337651 status.go:415] kubeconfig endpoint: extract IP: "mount-start-2-102671" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-102671" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/RestartStopped (25.47s)
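
Triage note: provisioning stops at `sudo systemctl restart crio` with "A dependency job for crio.service failed", i.e. a unit crio depends on failed to start, not crio itself. A diagnostic sketch, assuming the VM is still reachable over SSH (profile name from the log above):

	# Show crio's dependency tree; a failed unit will be flagged in the output
	minikube -p mount-start-2-102671 ssh -- sudo systemctl list-dependencies crio
	# Read the journal entries the error message points to
	minikube -p mount-start-2-102671 ssh -- sudo journalctl -xe --no-pager -n 50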

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (680.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-107035
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-107035
E0229 01:41:00.878586  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-107035: exit status 82 (2m0.275393435s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-107035"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-107035" : exit status 82
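
Triage note: the stop gave up after 2m0s with GUEST_STOP_TIMEOUT while the VM state stayed "Running"; the /tmp/minikube_stop_..._1.log file referenced above would show whether the shutdown request was ever acknowledged. A host-side check, assuming libvirt tooling is available on the agent (the kvm2 driver names the libvirt domain after the profile, so the domain name here is inferred, not taken from this run):

	# Inspect the domain state directly and attempt a clean ACPI shutdown
	virsh list --all
	virsh shutdown multinode-107035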
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-107035 --wait=true -v=8 --alsologtostderr
E0229 01:44:09.040414  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:44:37.826203  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:45:32.086548  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:49:09.040181  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:49:37.828507  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-107035 --wait=true -v=8 --alsologtostderr: (9m17.292743971s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-107035
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-107035 -n multinode-107035
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-107035 logs -n 25: (1.603101798s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-107035 ssh -n                                                                 | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107035 cp multinode-107035-m02:/home/docker/cp-test.txt                       | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4129256065/001/cp-test_multinode-107035-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n                                                                 | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107035 cp multinode-107035-m02:/home/docker/cp-test.txt                       | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035:/home/docker/cp-test_multinode-107035-m02_multinode-107035.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n                                                                 | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n multinode-107035 sudo cat                                       | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | /home/docker/cp-test_multinode-107035-m02_multinode-107035.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-107035 cp multinode-107035-m02:/home/docker/cp-test.txt                       | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m03:/home/docker/cp-test_multinode-107035-m02_multinode-107035-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n                                                                 | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n multinode-107035-m03 sudo cat                                   | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | /home/docker/cp-test_multinode-107035-m02_multinode-107035-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-107035 cp testdata/cp-test.txt                                                | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n                                                                 | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107035 cp multinode-107035-m03:/home/docker/cp-test.txt                       | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4129256065/001/cp-test_multinode-107035-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n                                                                 | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107035 cp multinode-107035-m03:/home/docker/cp-test.txt                       | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035:/home/docker/cp-test_multinode-107035-m03_multinode-107035.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n                                                                 | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n multinode-107035 sudo cat                                       | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | /home/docker/cp-test_multinode-107035-m03_multinode-107035.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-107035 cp multinode-107035-m03:/home/docker/cp-test.txt                       | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m02:/home/docker/cp-test_multinode-107035-m03_multinode-107035-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n                                                                 | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | multinode-107035-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107035 ssh -n multinode-107035-m02 sudo cat                                   | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | /home/docker/cp-test_multinode-107035-m03_multinode-107035-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-107035 node stop m03                                                          | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	| node    | multinode-107035 node start                                                             | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC | 29 Feb 24 01:40 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-107035                                                                | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC |                     |
	| stop    | -p multinode-107035                                                                     | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:40 UTC |                     |
	| start   | -p multinode-107035                                                                     | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:42 UTC | 29 Feb 24 01:52 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-107035                                                                | multinode-107035 | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:42:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:42:54.324536  340990 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:42:54.324668  340990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:42:54.324679  340990 out.go:304] Setting ErrFile to fd 2...
	I0229 01:42:54.324685  340990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:42:54.324889  340990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:42:54.325450  340990 out.go:298] Setting JSON to false
	I0229 01:42:54.326496  340990 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5117,"bootTime":1709165857,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:42:54.326588  340990 start.go:139] virtualization: kvm guest
	I0229 01:42:54.328656  340990 out.go:177] * [multinode-107035] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:42:54.329889  340990 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:42:54.331077  340990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:42:54.329924  340990 notify.go:220] Checking for updates...
	I0229 01:42:54.333462  340990 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:42:54.334656  340990 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:42:54.335757  340990 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:42:54.337125  340990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:42:54.338736  340990 config.go:182] Loaded profile config "multinode-107035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:42:54.338825  340990 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:42:54.339209  340990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:42:54.339272  340990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:42:54.355500  340990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44499
	I0229 01:42:54.355983  340990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:42:54.356561  340990 main.go:141] libmachine: Using API Version  1
	I0229 01:42:54.356584  340990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:42:54.356936  340990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:42:54.357141  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:42:54.391610  340990 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 01:42:54.392785  340990 start.go:299] selected driver: kvm2
	I0229 01:42:54.392798  340990 start.go:903] validating driver "kvm2" against &{Name:multinode-107035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-107035 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:42:54.392987  340990 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:42:54.393450  340990 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:42:54.393538  340990 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:42:54.408567  340990 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:42:54.409483  340990 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 01:42:54.409584  340990 cni.go:84] Creating CNI manager for ""
	I0229 01:42:54.409599  340990 cni.go:136] 3 nodes found, recommending kindnet
	I0229 01:42:54.409611  340990 start_flags.go:323] config:
	{Name:multinode-107035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-107035 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
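
The block above is the serialized cluster config that minikube persists to .minikube/profiles/multinode-107035/config.json. As a minimal sketch of reading the node list back out of that JSON: the reduced struct below is hypothetical and keeps only a few of the fields visible in the dump; minikube's real config type carries all of them.

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Hypothetical reduced view of the profile config shown in the log.
    type Node struct {
        Name         string
        IP           string
        Port         int
        ControlPlane bool
        Worker       bool
    }

    type ClusterConfig struct {
        Name  string
        Nodes []Node
    }

    func main() {
        // Path pattern from the log: .minikube/profiles/<profile>/config.json
        data, err := os.ReadFile("config.json")
        if err != nil {
            panic(err)
        }
        var cfg ClusterConfig
        if err := json.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }
        for _, n := range cfg.Nodes {
            fmt.Printf("node %q %s:%d control-plane=%v\n", n.Name, n.IP, n.Port, n.ControlPlane)
        }
    }
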
	I0229 01:42:54.409916  340990 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:42:54.411654  340990 out.go:177] * Starting control plane node multinode-107035 in cluster multinode-107035
	I0229 01:42:54.412937  340990 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:42:54.412987  340990 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 01:42:54.412998  340990 cache.go:56] Caching tarball of preloaded images
	I0229 01:42:54.413077  340990 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 01:42:54.413090  340990 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
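
The preload step only hits the network when the cached tarball is absent. A sketch of that existence check, with the cache path copied from the log (the download fallback itself is omitted):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Cache path from the log lines above.
        p := "/home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
        if _, err := os.Stat(p); err == nil {
            fmt.Println("Found local preload, skipping download")
        } else {
            fmt.Println("No local preload, would download:", err)
        }
    }
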
	I0229 01:42:54.413206  340990 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/config.json ...
	I0229 01:42:54.413386  340990 start.go:365] acquiring machines lock for multinode-107035: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:42:54.413425  340990 start.go:369] acquired machines lock for "multinode-107035" in 22.433µs
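
The "acquiring/acquired/released machines lock" lines are hold-time accounting around a named lock; minikube uses a cross-process lock with the 500ms delay and 13m timeout shown in the spec dump. The in-process sketch below only illustrates the timing bookkeeping, not the real locking mechanism:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var machines sync.Mutex // stand-in for the cross-process lock

        t0 := time.Now()
        machines.Lock()
        fmt.Printf("acquired machines lock in %v\n", time.Since(t0))

        held := time.Now()
        // ... fixHost / start work happens while the lock is held ...
        machines.Unlock()
        fmt.Printf("released machines lock, held for %v\n", time.Since(held))
    }
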
	I0229 01:42:54.413438  340990 start.go:96] Skipping create...Using existing machine configuration
	I0229 01:42:54.413448  340990 fix.go:54] fixHost starting: 
	I0229 01:42:54.413708  340990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:42:54.413741  340990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:42:54.428187  340990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45065
	I0229 01:42:54.428590  340990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:42:54.429054  340990 main.go:141] libmachine: Using API Version  1
	I0229 01:42:54.429074  340990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:42:54.429445  340990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:42:54.429623  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:42:54.429783  340990 main.go:141] libmachine: (multinode-107035) Calling .GetState
	I0229 01:42:54.431082  340990 fix.go:102] recreateIfNeeded on multinode-107035: state=Running err=<nil>
	W0229 01:42:54.431119  340990 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 01:42:54.432855  340990 out.go:177] * Updating the running kvm2 "multinode-107035" VM ...
	I0229 01:42:54.433995  340990 machine.go:88] provisioning docker machine ...
	I0229 01:42:54.434025  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:42:54.434339  340990 main.go:141] libmachine: (multinode-107035) Calling .GetMachineName
	I0229 01:42:54.434510  340990 buildroot.go:166] provisioning hostname "multinode-107035"
	I0229 01:42:54.434528  340990 main.go:141] libmachine: (multinode-107035) Calling .GetMachineName
	I0229 01:42:54.434693  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:42:54.437323  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:42:54.437746  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:37:55 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:42:54.437774  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:42:54.437918  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:42:54.438105  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:42:54.438261  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:42:54.438387  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:42:54.438525  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:42:54.438728  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0229 01:42:54.438744  340990 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-107035 && echo "multinode-107035" | sudo tee /etc/hostname
	I0229 01:43:12.978551  340990 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
	[... the same "no route to host" dial error repeated 58 more times, every 3-6 seconds, from 01:43:19 through 01:47:38 ...]
	I0229 01:47:44.466483  340990 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.183:22: connect: no route to host
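
The run of dial errors above is libmachine probing tcp/22 until the host answers or an overall deadline passes; here the VM was down, so every probe failed for about four and a half minutes. A generic sketch of such a reachability probe (address from the log, intervals approximate, not libmachine's code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH dials tcp/22 until it succeeds or the overall deadline passes.
    func waitForSSH(addr string, overall time.Duration) error {
        stop := time.Now().Add(overall)
        for time.Now().Before(stop) {
            conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            fmt.Printf("Error dialing TCP: %v\n", err)
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("ssh on %s not reachable within %s", addr, overall)
    }

    func main() {
        if err := waitForSSH("192.168.39.183:22", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
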
	I0229 01:47:47.467448  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:47:47.467489  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:47:47.469702  340990 machine.go:91] provisioned docker machine in 4m53.035685059s
	I0229 01:47:47.469754  340990 fix.go:56] fixHost completed within 4m53.056307193s
	I0229 01:47:47.469760  340990 start.go:83] releasing machines lock for "multinode-107035", held for 4m53.056327233s
	W0229 01:47:47.469777  340990 start.go:694] error starting host: provision: host is not running
	W0229 01:47:47.469902  340990 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 01:47:47.469913  340990 start.go:709] Will try again in 5 seconds ...
	I0229 01:47:52.472965  340990 start.go:365] acquiring machines lock for multinode-107035: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:47:52.473079  340990 start.go:369] acquired machines lock for "multinode-107035" in 69.03µs
	I0229 01:47:52.473110  340990 start.go:96] Skipping create...Using existing machine configuration
	I0229 01:47:52.473122  340990 fix.go:54] fixHost starting: 
	I0229 01:47:52.473445  340990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:47:52.473473  340990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:47:52.489272  340990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43743
	I0229 01:47:52.489744  340990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:47:52.490340  340990 main.go:141] libmachine: Using API Version  1
	I0229 01:47:52.490384  340990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:47:52.490747  340990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:47:52.490901  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:47:52.491063  340990 main.go:141] libmachine: (multinode-107035) Calling .GetState
	I0229 01:47:52.492929  340990 fix.go:102] recreateIfNeeded on multinode-107035: state=Stopped err=<nil>
	I0229 01:47:52.492954  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	W0229 01:47:52.493115  340990 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 01:47:52.495624  340990 out.go:177] * Restarting existing kvm2 VM for "multinode-107035" ...
	I0229 01:47:52.497036  340990 main.go:141] libmachine: (multinode-107035) Calling .Start
	I0229 01:47:52.497194  340990 main.go:141] libmachine: (multinode-107035) Ensuring networks are active...
	I0229 01:47:52.497942  340990 main.go:141] libmachine: (multinode-107035) Ensuring network default is active
	I0229 01:47:52.498284  340990 main.go:141] libmachine: (multinode-107035) Ensuring network mk-multinode-107035 is active
	I0229 01:47:52.498638  340990 main.go:141] libmachine: (multinode-107035) Getting domain xml...
	I0229 01:47:52.499310  340990 main.go:141] libmachine: (multinode-107035) Creating domain...
	I0229 01:47:53.708702  340990 main.go:141] libmachine: (multinode-107035) Waiting to get IP...
	I0229 01:47:53.709469  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:47:53.709958  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:47:53.709995  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:47:53.709902  341801 retry.go:31] will retry after 241.115293ms: waiting for machine to come up
	I0229 01:47:53.952359  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:47:53.952841  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:47:53.952874  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:47:53.952777  341801 retry.go:31] will retry after 363.263337ms: waiting for machine to come up
	I0229 01:47:54.317283  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:47:54.317740  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:47:54.317799  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:47:54.317713  341801 retry.go:31] will retry after 310.829815ms: waiting for machine to come up
	I0229 01:47:54.630269  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:47:54.630859  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:47:54.630890  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:47:54.630825  341801 retry.go:31] will retry after 547.903255ms: waiting for machine to come up
	I0229 01:47:55.180676  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:47:55.181148  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:47:55.181179  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:47:55.181099  341801 retry.go:31] will retry after 467.164282ms: waiting for machine to come up
	I0229 01:47:55.649859  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:47:55.650426  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:47:55.650452  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:47:55.650389  341801 retry.go:31] will retry after 716.643614ms: waiting for machine to come up
	I0229 01:47:56.368378  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:47:56.368837  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:47:56.368867  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:47:56.368798  341801 retry.go:31] will retry after 1.066230903s: waiting for machine to come up
	I0229 01:47:57.437129  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:47:57.437672  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:47:57.437704  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:47:57.437617  341801 retry.go:31] will retry after 1.237208011s: waiting for machine to come up
	I0229 01:47:58.676929  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:47:58.677362  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:47:58.677384  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:47:58.677332  341801 retry.go:31] will retry after 1.262635928s: waiting for machine to come up
	I0229 01:47:59.941817  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:47:59.942349  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:47:59.942376  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:47:59.942322  341801 retry.go:31] will retry after 2.222404587s: waiting for machine to come up
	I0229 01:48:02.167796  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:02.168220  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:48:02.168245  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:48:02.168177  341801 retry.go:31] will retry after 1.915974355s: waiting for machine to come up
	I0229 01:48:04.086604  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:04.087030  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:48:04.087060  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:48:04.086967  341801 retry.go:31] will retry after 2.518562725s: waiting for machine to come up
	I0229 01:48:06.608596  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:06.609112  340990 main.go:141] libmachine: (multinode-107035) DBG | unable to find current IP address of domain multinode-107035 in network mk-multinode-107035
	I0229 01:48:06.609134  340990 main.go:141] libmachine: (multinode-107035) DBG | I0229 01:48:06.609076  341801 retry.go:31] will retry after 3.889127175s: waiting for machine to come up
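
The growing "will retry after ..." delays while waiting for the machine's IP look like jittered exponential backoff. A generic sketch of that pattern (assumed from the delay shape; not minikube's actual retry helper):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // backoff returns a full-jitter exponential delay capped at limit.
    func backoff(attempt int, base, limit time.Duration) time.Duration {
        d := base << uint(attempt)
        if d > limit || d <= 0 { // guard against overflow on large attempts
            d = limit
        }
        return time.Duration(rand.Int63n(int64(d)))
    }

    func main() {
        for i := 0; i < 8; i++ {
            fmt.Printf("attempt %d: will retry after %v\n", i, backoff(i, 250*time.Millisecond, 5*time.Second))
        }
    }
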
	I0229 01:48:10.501414  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.501883  340990 main.go:141] libmachine: (multinode-107035) Found IP for machine: 192.168.39.183
	I0229 01:48:10.501916  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has current primary IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.501922  340990 main.go:141] libmachine: (multinode-107035) Reserving static IP address...
	I0229 01:48:10.502438  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "multinode-107035", mac: "52:54:00:dd:8b:7f", ip: "192.168.39.183"} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:10.502467  340990 main.go:141] libmachine: (multinode-107035) DBG | skip adding static IP to network mk-multinode-107035 - found existing host DHCP lease matching {name: "multinode-107035", mac: "52:54:00:dd:8b:7f", ip: "192.168.39.183"}
	I0229 01:48:10.502477  340990 main.go:141] libmachine: (multinode-107035) Reserved static IP address: 192.168.39.183
	I0229 01:48:10.502486  340990 main.go:141] libmachine: (multinode-107035) Waiting for SSH to be available...
	I0229 01:48:10.502494  340990 main.go:141] libmachine: (multinode-107035) DBG | Getting to WaitForSSH function...
	I0229 01:48:10.504926  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.505310  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:10.505350  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.505452  340990 main.go:141] libmachine: (multinode-107035) DBG | Using SSH client type: external
	I0229 01:48:10.505485  340990 main.go:141] libmachine: (multinode-107035) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa (-rw-------)
	I0229 01:48:10.505518  340990 main.go:141] libmachine: (multinode-107035) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:48:10.505536  340990 main.go:141] libmachine: (multinode-107035) DBG | About to run SSH command:
	I0229 01:48:10.505559  340990 main.go:141] libmachine: (multinode-107035) DBG | exit 0
	I0229 01:48:10.630541  340990 main.go:141] libmachine: (multinode-107035) DBG | SSH cmd err, output: <nil>: 
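
The DBG lines above show the driver shelling out to the system ssh binary with host-key checking disabled and the machine's private key. A minimal equivalent of that invocation via os/exec (argv copied from the log; the wrapper itself is illustrative):

    package main

    import "os/exec"

    func main() {
        // argv copied from the "Using SSH client type: external" DBG line.
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa",
            "-p", "22",
            "docker@192.168.39.183",
            "exit 0")
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
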
	I0229 01:48:10.630940  340990 main.go:141] libmachine: (multinode-107035) Calling .GetConfigRaw
	I0229 01:48:10.631610  340990 main.go:141] libmachine: (multinode-107035) Calling .GetIP
	I0229 01:48:10.633975  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.634380  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:10.634424  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.634700  340990 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/config.json ...
	I0229 01:48:10.634892  340990 machine.go:88] provisioning docker machine ...
	I0229 01:48:10.634909  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:48:10.635095  340990 main.go:141] libmachine: (multinode-107035) Calling .GetMachineName
	I0229 01:48:10.635260  340990 buildroot.go:166] provisioning hostname "multinode-107035"
	I0229 01:48:10.635281  340990 main.go:141] libmachine: (multinode-107035) Calling .GetMachineName
	I0229 01:48:10.635422  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:48:10.637346  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.637710  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:10.637734  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.637883  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:48:10.638039  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:10.638184  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:10.638364  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:48:10.638611  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:48:10.638902  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0229 01:48:10.638916  340990 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-107035 && echo "multinode-107035" | sudo tee /etc/hostname
	I0229 01:48:10.758135  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-107035
	
	I0229 01:48:10.758167  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:48:10.760818  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.761186  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:10.761223  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.761401  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:48:10.761630  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:10.761778  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:10.761917  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:48:10.762042  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:48:10.762282  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0229 01:48:10.762316  340990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-107035' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-107035/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-107035' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:48:10.876723  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
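
The shell run above guarantees a "127.0.1.1 multinode-107035" entry in the guest's /etc/hosts, rewriting a stale 127.0.1.1 line if one exists. The same logic as a local Go sketch (the real edit happens over SSH with sed/tee, as shown):

    package main

    import (
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the shell above: point 127.0.1.1 at the hostname.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), hostname) {
            return nil // entry already present
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + hostname
        out := string(data)
        if re.MatchString(out) {
            out = re.ReplaceAllString(out, entry)
        } else {
            out += entry + "\n"
        }
        return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "multinode-107035"); err != nil {
            panic(err)
        }
    }
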
	I0229 01:48:10.876752  340990 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 01:48:10.876815  340990 buildroot.go:174] setting up certificates
	I0229 01:48:10.876829  340990 provision.go:83] configureAuth start
	I0229 01:48:10.876843  340990 main.go:141] libmachine: (multinode-107035) Calling .GetMachineName
	I0229 01:48:10.877186  340990 main.go:141] libmachine: (multinode-107035) Calling .GetIP
	I0229 01:48:10.879723  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.880110  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:10.880142  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.880309  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:48:10.882452  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.882791  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:10.882818  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:10.882982  340990 provision.go:138] copyHostCerts
	I0229 01:48:10.883010  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 01:48:10.883044  340990 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 01:48:10.883053  340990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 01:48:10.883119  340990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 01:48:10.883227  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 01:48:10.883249  340990 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 01:48:10.883255  340990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 01:48:10.883280  340990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 01:48:10.883326  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 01:48:10.883342  340990 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 01:48:10.883355  340990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 01:48:10.883375  340990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 01:48:10.883428  340990 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.multinode-107035 san=[192.168.39.183 192.168.39.183 localhost 127.0.0.1 minikube multinode-107035]
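
The "generating server cert" step mints a server certificate whose SANs cover the machine IP, loopback, and the minikube hostnames listed in the log. A self-signed sketch using the same SANs and the 26280h lifetime from the config dump (minikube signs with its own CA rather than self-signing):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-107035"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.183"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-107035"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
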
	I0229 01:48:11.088718  340990 provision.go:172] copyRemoteCerts
	I0229 01:48:11.088801  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:48:11.088831  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:48:11.091762  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.092044  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:11.092071  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.092253  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:48:11.092491  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:11.092663  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:48:11.092817  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa Username:docker}
	I0229 01:48:11.178266  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 01:48:11.178350  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 01:48:11.205462  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 01:48:11.205534  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0229 01:48:11.231338  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 01:48:11.231436  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:48:11.257070  340990 provision.go:86] duration metric: configureAuth took 380.224055ms
	I0229 01:48:11.257105  340990 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:48:11.257355  340990 config.go:182] Loaded profile config "multinode-107035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:48:11.257444  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:48:11.260423  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.260892  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:11.260920  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.261097  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:48:11.261329  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:11.261507  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:11.261714  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:48:11.261917  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:48:11.262114  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0229 01:48:11.262134  340990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 01:48:11.548104  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 01:48:11.548139  340990 machine.go:91] provisioned docker machine in 913.232701ms
	I0229 01:48:11.548154  340990 start.go:300] post-start starting for "multinode-107035" (driver="kvm2")
	I0229 01:48:11.548170  340990 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:48:11.548189  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:48:11.548562  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:48:11.548594  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:48:11.551148  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.551607  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:11.551638  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.551769  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:48:11.551982  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:11.552184  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:48:11.552359  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa Username:docker}
	I0229 01:48:11.638054  340990 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:48:11.642898  340990 command_runner.go:130] > NAME=Buildroot
	I0229 01:48:11.642922  340990 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 01:48:11.642928  340990 command_runner.go:130] > ID=buildroot
	I0229 01:48:11.642935  340990 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 01:48:11.642942  340990 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 01:48:11.642989  340990 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:48:11.643007  340990 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 01:48:11.643104  340990 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 01:48:11.643191  340990 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 01:48:11.643205  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> /etc/ssl/certs/3238852.pem
	I0229 01:48:11.643289  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:48:11.653583  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 01:48:11.682312  340990 start.go:303] post-start completed in 134.141673ms
	I0229 01:48:11.682339  340990 fix.go:56] fixHost completed within 19.20921826s
	I0229 01:48:11.682362  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:48:11.684987  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.685394  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:11.685420  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.685569  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:48:11.685742  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:11.685916  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:11.686072  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:48:11.686263  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:48:11.686490  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0229 01:48:11.686502  340990 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 01:48:11.791611  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709171291.772685706
	
	I0229 01:48:11.791635  340990 fix.go:206] guest clock: 1709171291.772685706
	I0229 01:48:11.791642  340990 fix.go:219] Guest: 2024-02-29 01:48:11.772685706 +0000 UTC Remote: 2024-02-29 01:48:11.68234329 +0000 UTC m=+317.408238335 (delta=90.342416ms)
	I0229 01:48:11.791687  340990 fix.go:190] guest clock delta is within tolerance: 90.342416ms
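
The guest-clock check parses the output of date +%s.%N and compares it with the host clock, accepting small deltas (90ms here). Sketch of the comparison, with the guest timestamp taken from the log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1709171291.772685706" // guest's `date +%s.%N` output, from the log
        parts := strings.SplitN(guestOut, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %v\n", delta)
    }
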
	I0229 01:48:11.791699  340990 start.go:83] releasing machines lock for "multinode-107035", held for 19.31860946s
	I0229 01:48:11.791731  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:48:11.792021  340990 main.go:141] libmachine: (multinode-107035) Calling .GetIP
	I0229 01:48:11.794681  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.794990  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:11.795017  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.795192  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:48:11.795706  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:48:11.795897  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:48:11.795986  340990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:48:11.796042  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:48:11.796130  340990 ssh_runner.go:195] Run: cat /version.json
	I0229 01:48:11.796148  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:48:11.798622  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.798811  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.799018  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:11.799054  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.799194  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:48:11.799311  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:11.799351  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:11.799361  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:11.799499  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:48:11.799519  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:48:11.799656  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:48:11.799663  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa Username:docker}
	I0229 01:48:11.799800  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:48:11.799934  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa Username:docker}
	I0229 01:48:11.875790  340990 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 01:48:11.876110  340990 ssh_runner.go:195] Run: systemctl --version
	I0229 01:48:11.901466  340990 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 01:48:11.901504  340990 command_runner.go:130] > systemd 252 (252)
	I0229 01:48:11.901521  340990 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 01:48:11.901580  340990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 01:48:12.052974  340990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 01:48:12.060009  340990 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 01:48:12.060148  340990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:48:12.060227  340990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:48:12.079934  340990 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 01:48:12.080018  340990 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
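
Disabling the podman bridge config is done by renaming matching files to *.mk_disabled so CRI-O ignores them. The same idea in Go (glob patterns from the find invocation above; error handling trimmed):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, _ := filepath.Glob(pat)
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already masked
                }
                fmt.Println("disabling", m)
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    fmt.Println(err)
                }
            }
        }
    }
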
	I0229 01:48:12.080034  340990 start.go:475] detecting cgroup driver to use...
	I0229 01:48:12.080115  340990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:48:12.099483  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:48:12.116821  340990 docker.go:217] disabling cri-docker service (if available) ...
	I0229 01:48:12.116884  340990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 01:48:12.133056  340990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 01:48:12.149612  340990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 01:48:12.167192  340990 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/cri-docker.socket".
	I0229 01:48:12.269931  340990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 01:48:12.286332  340990 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0229 01:48:12.427507  340990 docker.go:233] disabling docker service ...
	I0229 01:48:12.427592  340990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 01:48:12.444711  340990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 01:48:12.459741  340990 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0229 01:48:12.459849  340990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 01:48:12.596542  340990 command_runner.go:130] ! Removed "/etc/systemd/system/sockets.target.wants/docker.socket".
	I0229 01:48:12.596622  340990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 01:48:12.612223  340990 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0229 01:48:12.612594  340990 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0229 01:48:12.717464  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
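Everything from "disabling cri-docker service" to here is the runtime hand-off: both cri-dockerd and dockerd are stopped, their sockets disabled, and the services masked so nothing can claim the CRI socket before CRI-O does. Condensed into the equivalent shell (unit names as in the log; the "Unit ... not loaded" failures for units that do not exist are expected and harmless):

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service        # symlinks the unit to /dev/null
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active --quiet docker || echo "docker is inactive"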
	I0229 01:48:12.733014  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:48:12.753025  340990 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0229 01:48:12.753350  340990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 01:48:12.753425  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:48:12.764928  340990 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 01:48:12.765002  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:48:12.776489  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:48:12.787517  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
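The three sed edits rewrite CRI-O's drop-in config so its pause image matches the one kubelet expects and its cgroup driver agrees with kubelet's (cgroupfs on this ISO, per the crio.go:70 line above). One way to spot-check the result by hand on the node:

    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"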
	I0229 01:48:12.798716  340990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:48:12.810155  340990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:48:12.819965  340990 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 01:48:12.820008  340990 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 01:48:12.820049  340990 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 01:48:12.833761  340990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:48:12.843756  340990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:48:12.960341  340990 ssh_runner.go:195] Run: sudo systemctl restart crio
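The sysctl probe exits with status 255 only because br_netfilter is not loaded yet, so /proc/sys/net/bridge/ does not exist; loading the module creates the key and the setup proceeds. The recovery path, as plain shell:

    sudo modprobe br_netfilter                     # creates /proc/sys/net/bridge/*
    sysctl net.bridge.bridge-nf-call-iptables      # now readable instead of ENOENT
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio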
	I0229 01:48:13.103142  340990 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 01:48:13.103229  340990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 01:48:13.109014  340990 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0229 01:48:13.109037  340990 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 01:48:13.109044  340990 command_runner.go:130] > Device: 0,22	Inode: 801         Links: 1
	I0229 01:48:13.109051  340990 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 01:48:13.109056  340990 command_runner.go:130] > Access: 2024-02-29 01:48:13.076220996 +0000
	I0229 01:48:13.109061  340990 command_runner.go:130] > Modify: 2024-02-29 01:48:13.076220996 +0000
	I0229 01:48:13.109066  340990 command_runner.go:130] > Change: 2024-02-29 01:48:13.076220996 +0000
	I0229 01:48:13.109070  340990 command_runner.go:130] >  Birth: -
	I0229 01:48:13.109198  340990 start.go:543] Will wait 60s for crictl version
	I0229 01:48:13.109253  340990 ssh_runner.go:195] Run: which crictl
	I0229 01:48:13.113224  340990 command_runner.go:130] > /usr/bin/crictl
	I0229 01:48:13.113403  340990 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:48:13.152795  340990 command_runner.go:130] > Version:  0.1.0
	I0229 01:48:13.152819  340990 command_runner.go:130] > RuntimeName:  cri-o
	I0229 01:48:13.152824  340990 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0229 01:48:13.152832  340990 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 01:48:13.154399  340990 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 01:48:13.154461  340990 ssh_runner.go:195] Run: crio --version
	I0229 01:48:13.185463  340990 command_runner.go:130] > crio version 1.29.1
	I0229 01:48:13.185491  340990 command_runner.go:130] > Version:        1.29.1
	I0229 01:48:13.185500  340990 command_runner.go:130] > GitCommit:      unknown
	I0229 01:48:13.185506  340990 command_runner.go:130] > GitCommitDate:  unknown
	I0229 01:48:13.185522  340990 command_runner.go:130] > GitTreeState:   clean
	I0229 01:48:13.185531  340990 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 01:48:13.185536  340990 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 01:48:13.185547  340990 command_runner.go:130] > Compiler:       gc
	I0229 01:48:13.185558  340990 command_runner.go:130] > Platform:       linux/amd64
	I0229 01:48:13.185565  340990 command_runner.go:130] > Linkmode:       dynamic
	I0229 01:48:13.185573  340990 command_runner.go:130] > BuildTags:      
	I0229 01:48:13.185587  340990 command_runner.go:130] >   containers_image_ostree_stub
	I0229 01:48:13.185594  340990 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 01:48:13.185599  340990 command_runner.go:130] >   btrfs_noversion
	I0229 01:48:13.185609  340990 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 01:48:13.185616  340990 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 01:48:13.185622  340990 command_runner.go:130] >   seccomp
	I0229 01:48:13.185629  340990 command_runner.go:130] > LDFlags:          unknown
	I0229 01:48:13.185635  340990 command_runner.go:130] > SeccompEnabled:   true
	I0229 01:48:13.185642  340990 command_runner.go:130] > AppArmorEnabled:  false
	I0229 01:48:13.185763  340990 ssh_runner.go:195] Run: crio --version
	I0229 01:48:13.215701  340990 command_runner.go:130] > crio version 1.29.1
	I0229 01:48:13.215731  340990 command_runner.go:130] > Version:        1.29.1
	I0229 01:48:13.215738  340990 command_runner.go:130] > GitCommit:      unknown
	I0229 01:48:13.215744  340990 command_runner.go:130] > GitCommitDate:  unknown
	I0229 01:48:13.215751  340990 command_runner.go:130] > GitTreeState:   clean
	I0229 01:48:13.215758  340990 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 01:48:13.215765  340990 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 01:48:13.215778  340990 command_runner.go:130] > Compiler:       gc
	I0229 01:48:13.215784  340990 command_runner.go:130] > Platform:       linux/amd64
	I0229 01:48:13.215788  340990 command_runner.go:130] > Linkmode:       dynamic
	I0229 01:48:13.215792  340990 command_runner.go:130] > BuildTags:      
	I0229 01:48:13.215797  340990 command_runner.go:130] >   containers_image_ostree_stub
	I0229 01:48:13.215801  340990 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 01:48:13.215805  340990 command_runner.go:130] >   btrfs_noversion
	I0229 01:48:13.215809  340990 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 01:48:13.215825  340990 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 01:48:13.215836  340990 command_runner.go:130] >   seccomp
	I0229 01:48:13.215840  340990 command_runner.go:130] > LDFlags:          unknown
	I0229 01:48:13.215844  340990 command_runner.go:130] > SeccompEnabled:   true
	I0229 01:48:13.215848  340990 command_runner.go:130] > AppArmorEnabled:  false
	I0229 01:48:13.218673  340990 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 01:48:13.219970  340990 main.go:141] libmachine: (multinode-107035) Calling .GetIP
	I0229 01:48:13.222802  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:13.223212  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:48:13.223242  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:48:13.223436  340990 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 01:48:13.228194  340990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
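That one-liner is an idempotent /etc/hosts edit: strip any stale host.minikube.internal entry, append the current gateway IP, and cp (not mv) the temp file back so /etc/hosts is overwritten in place. Unrolled:

    # $'\t' is a literal tab; 192.168.39.1 is this VM's libvirt gateway.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.39.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts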
	I0229 01:48:13.241601  340990 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:48:13.241666  340990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:48:13.281463  340990 command_runner.go:130] > {
	I0229 01:48:13.281486  340990 command_runner.go:130] >   "images": [
	I0229 01:48:13.281489  340990 command_runner.go:130] >     {
	I0229 01:48:13.281501  340990 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0229 01:48:13.281508  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:13.281517  340990 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0229 01:48:13.281523  340990 command_runner.go:130] >       ],
	I0229 01:48:13.281529  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:13.281542  340990 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0229 01:48:13.281554  340990 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0229 01:48:13.281559  340990 command_runner.go:130] >       ],
	I0229 01:48:13.281569  340990 command_runner.go:130] >       "size": "65258016",
	I0229 01:48:13.281574  340990 command_runner.go:130] >       "uid": null,
	I0229 01:48:13.281582  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:13.281589  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:13.281599  340990 command_runner.go:130] >       "pinned": false
	I0229 01:48:13.281607  340990 command_runner.go:130] >     },
	I0229 01:48:13.281612  340990 command_runner.go:130] >     {
	I0229 01:48:13.281623  340990 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0229 01:48:13.281632  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:13.281639  340990 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0229 01:48:13.281643  340990 command_runner.go:130] >       ],
	I0229 01:48:13.281647  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:13.281654  340990 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0229 01:48:13.281663  340990 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0229 01:48:13.281671  340990 command_runner.go:130] >       ],
	I0229 01:48:13.281675  340990 command_runner.go:130] >       "size": "750414",
	I0229 01:48:13.281681  340990 command_runner.go:130] >       "uid": {
	I0229 01:48:13.281685  340990 command_runner.go:130] >         "value": "65535"
	I0229 01:48:13.281700  340990 command_runner.go:130] >       },
	I0229 01:48:13.281712  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:13.281718  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:13.281723  340990 command_runner.go:130] >       "pinned": true
	I0229 01:48:13.281728  340990 command_runner.go:130] >     }
	I0229 01:48:13.281732  340990 command_runner.go:130] >   ]
	I0229 01:48:13.281737  340990 command_runner.go:130] > }
	I0229 01:48:13.281882  340990 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 01:48:13.281931  340990 ssh_runner.go:195] Run: which lz4
	I0229 01:48:13.286453  340990 command_runner.go:130] > /usr/bin/lz4
	I0229 01:48:13.286605  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 01:48:13.286712  340990 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 01:48:13.291522  340990 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 01:48:13.291764  340990 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 01:48:13.291788  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 01:48:15.117680  340990 crio.go:444] Took 1.831004 seconds to copy over tarball
	I0229 01:48:15.117777  340990 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 01:48:17.811179  340990 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.693368341s)
	I0229 01:48:17.811205  340990 crio.go:451] Took 2.693497 seconds to extract the tarball
	I0229 01:48:17.811215  340990 ssh_runner.go:146] rm: /preloaded.tar.lz4
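This is the preload fast path: /preloaded.tar.lz4 is absent on the node, so the ~437 MiB tarball is scp'd over from the host cache and unpacked straight into /var, where CRI-O's image store lives. The tar flags matter:

    # --xattrs-include security.capability preserves file capabilities on
    # binaries inside the images; -I lz4 decompresses while extracting.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4    # free the space once /var/lib/containers is populated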
	I0229 01:48:17.853454  340990 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 01:48:17.897981  340990 command_runner.go:130] > {
	I0229 01:48:17.898009  340990 command_runner.go:130] >   "images": [
	I0229 01:48:17.898014  340990 command_runner.go:130] >     {
	I0229 01:48:17.898026  340990 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0229 01:48:17.898033  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:17.898040  340990 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0229 01:48:17.898045  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898051  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:17.898064  340990 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0229 01:48:17.898076  340990 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0229 01:48:17.898082  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898090  340990 command_runner.go:130] >       "size": "65258016",
	I0229 01:48:17.898100  340990 command_runner.go:130] >       "uid": null,
	I0229 01:48:17.898108  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:17.898121  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:17.898131  340990 command_runner.go:130] >       "pinned": false
	I0229 01:48:17.898138  340990 command_runner.go:130] >     },
	I0229 01:48:17.898145  340990 command_runner.go:130] >     {
	I0229 01:48:17.898156  340990 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0229 01:48:17.898165  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:17.898175  340990 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0229 01:48:17.898193  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898204  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:17.898218  340990 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0229 01:48:17.898243  340990 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0229 01:48:17.898251  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898265  340990 command_runner.go:130] >       "size": "31470524",
	I0229 01:48:17.898275  340990 command_runner.go:130] >       "uid": null,
	I0229 01:48:17.898285  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:17.898292  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:17.898299  340990 command_runner.go:130] >       "pinned": false
	I0229 01:48:17.898308  340990 command_runner.go:130] >     },
	I0229 01:48:17.898322  340990 command_runner.go:130] >     {
	I0229 01:48:17.898336  340990 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0229 01:48:17.898346  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:17.898355  340990 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0229 01:48:17.898364  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898371  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:17.898391  340990 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0229 01:48:17.898407  340990 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0229 01:48:17.898416  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898422  340990 command_runner.go:130] >       "size": "53621675",
	I0229 01:48:17.898431  340990 command_runner.go:130] >       "uid": null,
	I0229 01:48:17.898438  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:17.898446  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:17.898453  340990 command_runner.go:130] >       "pinned": false
	I0229 01:48:17.898463  340990 command_runner.go:130] >     },
	I0229 01:48:17.898469  340990 command_runner.go:130] >     {
	I0229 01:48:17.898480  340990 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0229 01:48:17.898490  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:17.898499  340990 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0229 01:48:17.898507  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898516  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:17.898531  340990 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0229 01:48:17.898546  340990 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0229 01:48:17.898568  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898584  340990 command_runner.go:130] >       "size": "295456551",
	I0229 01:48:17.898596  340990 command_runner.go:130] >       "uid": {
	I0229 01:48:17.898604  340990 command_runner.go:130] >         "value": "0"
	I0229 01:48:17.898610  340990 command_runner.go:130] >       },
	I0229 01:48:17.898619  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:17.898626  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:17.898636  340990 command_runner.go:130] >       "pinned": false
	I0229 01:48:17.898645  340990 command_runner.go:130] >     },
	I0229 01:48:17.898651  340990 command_runner.go:130] >     {
	I0229 01:48:17.898665  340990 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0229 01:48:17.898675  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:17.898687  340990 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0229 01:48:17.898695  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898702  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:17.898717  340990 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0229 01:48:17.898733  340990 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0229 01:48:17.898741  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898749  340990 command_runner.go:130] >       "size": "127226832",
	I0229 01:48:17.898759  340990 command_runner.go:130] >       "uid": {
	I0229 01:48:17.898765  340990 command_runner.go:130] >         "value": "0"
	I0229 01:48:17.898774  340990 command_runner.go:130] >       },
	I0229 01:48:17.898782  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:17.898790  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:17.898798  340990 command_runner.go:130] >       "pinned": false
	I0229 01:48:17.898806  340990 command_runner.go:130] >     },
	I0229 01:48:17.898813  340990 command_runner.go:130] >     {
	I0229 01:48:17.898823  340990 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0229 01:48:17.898832  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:17.898842  340990 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0229 01:48:17.898852  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898860  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:17.898876  340990 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0229 01:48:17.898892  340990 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0229 01:48:17.898900  340990 command_runner.go:130] >       ],
	I0229 01:48:17.898907  340990 command_runner.go:130] >       "size": "123261750",
	I0229 01:48:17.898917  340990 command_runner.go:130] >       "uid": {
	I0229 01:48:17.898924  340990 command_runner.go:130] >         "value": "0"
	I0229 01:48:17.898939  340990 command_runner.go:130] >       },
	I0229 01:48:17.898949  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:17.898956  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:17.898965  340990 command_runner.go:130] >       "pinned": false
	I0229 01:48:17.898971  340990 command_runner.go:130] >     },
	I0229 01:48:17.898979  340990 command_runner.go:130] >     {
	I0229 01:48:17.898991  340990 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0229 01:48:17.899000  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:17.899010  340990 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0229 01:48:17.899019  340990 command_runner.go:130] >       ],
	I0229 01:48:17.899027  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:17.899043  340990 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0229 01:48:17.899057  340990 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0229 01:48:17.899066  340990 command_runner.go:130] >       ],
	I0229 01:48:17.899073  340990 command_runner.go:130] >       "size": "74749335",
	I0229 01:48:17.899082  340990 command_runner.go:130] >       "uid": null,
	I0229 01:48:17.899089  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:17.899099  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:17.899107  340990 command_runner.go:130] >       "pinned": false
	I0229 01:48:17.899116  340990 command_runner.go:130] >     },
	I0229 01:48:17.899125  340990 command_runner.go:130] >     {
	I0229 01:48:17.899139  340990 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0229 01:48:17.899149  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:17.899159  340990 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0229 01:48:17.899167  340990 command_runner.go:130] >       ],
	I0229 01:48:17.899175  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:17.899207  340990 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0229 01:48:17.899223  340990 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0229 01:48:17.899232  340990 command_runner.go:130] >       ],
	I0229 01:48:17.899240  340990 command_runner.go:130] >       "size": "61551410",
	I0229 01:48:17.899248  340990 command_runner.go:130] >       "uid": {
	I0229 01:48:17.899256  340990 command_runner.go:130] >         "value": "0"
	I0229 01:48:17.899263  340990 command_runner.go:130] >       },
	I0229 01:48:17.899271  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:17.899280  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:17.899287  340990 command_runner.go:130] >       "pinned": false
	I0229 01:48:17.899303  340990 command_runner.go:130] >     },
	I0229 01:48:17.899328  340990 command_runner.go:130] >     {
	I0229 01:48:17.899343  340990 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0229 01:48:17.899352  340990 command_runner.go:130] >       "repoTags": [
	I0229 01:48:17.899363  340990 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0229 01:48:17.899371  340990 command_runner.go:130] >       ],
	I0229 01:48:17.899379  340990 command_runner.go:130] >       "repoDigests": [
	I0229 01:48:17.899393  340990 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0229 01:48:17.899408  340990 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0229 01:48:17.899416  340990 command_runner.go:130] >       ],
	I0229 01:48:17.899424  340990 command_runner.go:130] >       "size": "750414",
	I0229 01:48:17.899433  340990 command_runner.go:130] >       "uid": {
	I0229 01:48:17.899440  340990 command_runner.go:130] >         "value": "65535"
	I0229 01:48:17.899449  340990 command_runner.go:130] >       },
	I0229 01:48:17.899457  340990 command_runner.go:130] >       "username": "",
	I0229 01:48:17.899465  340990 command_runner.go:130] >       "spec": null,
	I0229 01:48:17.899473  340990 command_runner.go:130] >       "pinned": true
	I0229 01:48:17.899481  340990 command_runner.go:130] >     }
	I0229 01:48:17.899486  340990 command_runner.go:130] >   ]
	I0229 01:48:17.899492  340990 command_runner.go:130] > }
	I0229 01:48:17.899640  340990 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 01:48:17.899654  340990 cache_images.go:84] Images are preloaded, skipping loading
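Compare the two crictl dumps: before extraction only kindnetd and pause were present, so minikube declared kube-apiserver:v1.28.4 missing; afterwards all the control-plane images are listed and loading is skipped. minikube does this check in Go, but a rough shell equivalent (jq assumed available, which it may not be inside the guest) is:

    sudo crictl images --output json \
      | jq -r '.images[].repoTags[]' \
      | grep -qx 'registry.k8s.io/kube-apiserver:v1.28.4' \
      && echo preloaded || echo 'not preloaded'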
	I0229 01:48:17.899738  340990 ssh_runner.go:195] Run: crio config
	I0229 01:48:17.934233  340990 command_runner.go:130] ! time="2024-02-29 01:48:17.927154755Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0229 01:48:17.940362  340990 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0229 01:48:17.944288  340990 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0229 01:48:17.944306  340990 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0229 01:48:17.944312  340990 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0229 01:48:17.944315  340990 command_runner.go:130] > #
	I0229 01:48:17.944322  340990 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0229 01:48:17.944328  340990 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0229 01:48:17.944334  340990 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0229 01:48:17.944367  340990 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0229 01:48:17.944378  340990 command_runner.go:130] > # reload'.
	I0229 01:48:17.944383  340990 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0229 01:48:17.944389  340990 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0229 01:48:17.944395  340990 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0229 01:48:17.944410  340990 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0229 01:48:17.944416  340990 command_runner.go:130] > [crio]
	I0229 01:48:17.944421  340990 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0229 01:48:17.944429  340990 command_runner.go:130] > # containers images, in this directory.
	I0229 01:48:17.944438  340990 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0229 01:48:17.944453  340990 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0229 01:48:17.944465  340990 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0229 01:48:17.944478  340990 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0229 01:48:17.944488  340990 command_runner.go:130] > # imagestore = ""
	I0229 01:48:17.944498  340990 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0229 01:48:17.944510  340990 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0229 01:48:17.944519  340990 command_runner.go:130] > storage_driver = "overlay"
	I0229 01:48:17.944528  340990 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0229 01:48:17.944537  340990 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0229 01:48:17.944541  340990 command_runner.go:130] > storage_option = [
	I0229 01:48:17.944548  340990 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0229 01:48:17.944551  340990 command_runner.go:130] > ]
	I0229 01:48:17.944562  340990 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0229 01:48:17.944574  340990 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0229 01:48:17.944581  340990 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0229 01:48:17.944594  340990 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0229 01:48:17.944606  340990 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0229 01:48:17.944616  340990 command_runner.go:130] > # always happen on a node reboot
	I0229 01:48:17.944627  340990 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0229 01:48:17.944646  340990 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0229 01:48:17.944659  340990 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0229 01:48:17.944670  340990 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0229 01:48:17.944682  340990 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0229 01:48:17.944698  340990 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0229 01:48:17.944715  340990 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0229 01:48:17.944725  340990 command_runner.go:130] > # internal_wipe = true
	I0229 01:48:17.944740  340990 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0229 01:48:17.944750  340990 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0229 01:48:17.944758  340990 command_runner.go:130] > # internal_repair = false
	I0229 01:48:17.944770  340990 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0229 01:48:17.944784  340990 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0229 01:48:17.944803  340990 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0229 01:48:17.944815  340990 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0229 01:48:17.944833  340990 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0229 01:48:17.944842  340990 command_runner.go:130] > [crio.api]
	I0229 01:48:17.944851  340990 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0229 01:48:17.944858  340990 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0229 01:48:17.944870  340990 command_runner.go:130] > # IP address on which the stream server will listen.
	I0229 01:48:17.944881  340990 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0229 01:48:17.944892  340990 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0229 01:48:17.944903  340990 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0229 01:48:17.944912  340990 command_runner.go:130] > # stream_port = "0"
	I0229 01:48:17.944930  340990 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0229 01:48:17.944941  340990 command_runner.go:130] > # stream_enable_tls = false
	I0229 01:48:17.944950  340990 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0229 01:48:17.944959  340990 command_runner.go:130] > # stream_idle_timeout = ""
	I0229 01:48:17.944973  340990 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0229 01:48:17.944990  340990 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0229 01:48:17.944999  340990 command_runner.go:130] > # minutes.
	I0229 01:48:17.945009  340990 command_runner.go:130] > # stream_tls_cert = ""
	I0229 01:48:17.945021  340990 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0229 01:48:17.945034  340990 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0229 01:48:17.945043  340990 command_runner.go:130] > # stream_tls_key = ""
	I0229 01:48:17.945053  340990 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0229 01:48:17.945063  340990 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0229 01:48:17.945095  340990 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0229 01:48:17.945106  340990 command_runner.go:130] > # stream_tls_ca = ""
	I0229 01:48:17.945118  340990 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 01:48:17.945128  340990 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0229 01:48:17.945143  340990 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 01:48:17.945152  340990 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0229 01:48:17.945165  340990 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0229 01:48:17.945178  340990 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0229 01:48:17.945185  340990 command_runner.go:130] > [crio.runtime]
	I0229 01:48:17.945191  340990 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0229 01:48:17.945203  340990 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0229 01:48:17.945213  340990 command_runner.go:130] > # "nofile=1024:2048"
	I0229 01:48:17.945229  340990 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0229 01:48:17.945239  340990 command_runner.go:130] > # default_ulimits = [
	I0229 01:48:17.945244  340990 command_runner.go:130] > # ]
	I0229 01:48:17.945257  340990 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0229 01:48:17.945266  340990 command_runner.go:130] > # no_pivot = false
	I0229 01:48:17.945277  340990 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0229 01:48:17.945286  340990 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0229 01:48:17.945295  340990 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0229 01:48:17.945307  340990 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0229 01:48:17.945319  340990 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0229 01:48:17.945333  340990 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 01:48:17.945344  340990 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0229 01:48:17.945353  340990 command_runner.go:130] > # Cgroup setting for conmon
	I0229 01:48:17.945366  340990 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0229 01:48:17.945376  340990 command_runner.go:130] > conmon_cgroup = "pod"
	I0229 01:48:17.945386  340990 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0229 01:48:17.945395  340990 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0229 01:48:17.945409  340990 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 01:48:17.945419  340990 command_runner.go:130] > conmon_env = [
	I0229 01:48:17.945429  340990 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 01:48:17.945437  340990 command_runner.go:130] > ]
	I0229 01:48:17.945447  340990 command_runner.go:130] > # Additional environment variables to set for all the
	I0229 01:48:17.945458  340990 command_runner.go:130] > # containers. These are overridden if set in the
	I0229 01:48:17.945470  340990 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0229 01:48:17.945478  340990 command_runner.go:130] > # default_env = [
	I0229 01:48:17.945487  340990 command_runner.go:130] > # ]
	I0229 01:48:17.945495  340990 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0229 01:48:17.945509  340990 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0229 01:48:17.945519  340990 command_runner.go:130] > # selinux = false
	I0229 01:48:17.945530  340990 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0229 01:48:17.945544  340990 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0229 01:48:17.945556  340990 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0229 01:48:17.945565  340990 command_runner.go:130] > # seccomp_profile = ""
	I0229 01:48:17.945577  340990 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0229 01:48:17.945588  340990 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0229 01:48:17.945598  340990 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0229 01:48:17.945615  340990 command_runner.go:130] > # which might increase security.
	I0229 01:48:17.945626  340990 command_runner.go:130] > # This option is currently deprecated,
	I0229 01:48:17.945636  340990 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0229 01:48:17.945647  340990 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0229 01:48:17.945660  340990 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0229 01:48:17.945673  340990 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0229 01:48:17.945686  340990 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0229 01:48:17.945696  340990 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0229 01:48:17.945704  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:48:17.945715  340990 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0229 01:48:17.945729  340990 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0229 01:48:17.945739  340990 command_runner.go:130] > # the cgroup blockio controller.
	I0229 01:48:17.945749  340990 command_runner.go:130] > # blockio_config_file = ""
	I0229 01:48:17.945762  340990 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0229 01:48:17.945772  340990 command_runner.go:130] > # blockio parameters.
	I0229 01:48:17.945781  340990 command_runner.go:130] > # blockio_reload = false
	I0229 01:48:17.945792  340990 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0229 01:48:17.945799  340990 command_runner.go:130] > # irqbalance daemon.
	I0229 01:48:17.945806  340990 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0229 01:48:17.945820  340990 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0229 01:48:17.945832  340990 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0229 01:48:17.945846  340990 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0229 01:48:17.945858  340990 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0229 01:48:17.945871  340990 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0229 01:48:17.945882  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:48:17.945890  340990 command_runner.go:130] > # rdt_config_file = ""
	I0229 01:48:17.945895  340990 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0229 01:48:17.945904  340990 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0229 01:48:17.945953  340990 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0229 01:48:17.945963  340990 command_runner.go:130] > # separate_pull_cgroup = ""
	I0229 01:48:17.945974  340990 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0229 01:48:17.945983  340990 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0229 01:48:17.945992  340990 command_runner.go:130] > # will be added.
	I0229 01:48:17.946002  340990 command_runner.go:130] > # default_capabilities = [
	I0229 01:48:17.946009  340990 command_runner.go:130] > # 	"CHOWN",
	I0229 01:48:17.946018  340990 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0229 01:48:17.946034  340990 command_runner.go:130] > # 	"FSETID",
	I0229 01:48:17.946044  340990 command_runner.go:130] > # 	"FOWNER",
	I0229 01:48:17.946053  340990 command_runner.go:130] > # 	"SETGID",
	I0229 01:48:17.946062  340990 command_runner.go:130] > # 	"SETUID",
	I0229 01:48:17.946070  340990 command_runner.go:130] > # 	"SETPCAP",
	I0229 01:48:17.946077  340990 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0229 01:48:17.946086  340990 command_runner.go:130] > # 	"KILL",
	I0229 01:48:17.946094  340990 command_runner.go:130] > # ]
	I0229 01:48:17.946108  340990 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0229 01:48:17.946122  340990 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0229 01:48:17.946133  340990 command_runner.go:130] > # add_inheritable_capabilities = false
	I0229 01:48:17.946155  340990 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0229 01:48:17.946167  340990 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 01:48:17.946174  340990 command_runner.go:130] > # default_sysctls = [
	I0229 01:48:17.946178  340990 command_runner.go:130] > # ]
	I0229 01:48:17.946188  340990 command_runner.go:130] > # List of devices on the host that a
	I0229 01:48:17.946205  340990 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0229 01:48:17.946216  340990 command_runner.go:130] > # allowed_devices = [
	I0229 01:48:17.946239  340990 command_runner.go:130] > # 	"/dev/fuse",
	I0229 01:48:17.946248  340990 command_runner.go:130] > # ]
	I0229 01:48:17.946256  340990 command_runner.go:130] > # List of additional devices. specified as
	I0229 01:48:17.946271  340990 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0229 01:48:17.946282  340990 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0229 01:48:17.946294  340990 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 01:48:17.946303  340990 command_runner.go:130] > # additional_devices = [
	I0229 01:48:17.946306  340990 command_runner.go:130] > # ]
	I0229 01:48:17.946313  340990 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0229 01:48:17.946322  340990 command_runner.go:130] > # cdi_spec_dirs = [
	I0229 01:48:17.946329  340990 command_runner.go:130] > # 	"/etc/cdi",
	I0229 01:48:17.946335  340990 command_runner.go:130] > # 	"/var/run/cdi",
	I0229 01:48:17.946344  340990 command_runner.go:130] > # ]
	I0229 01:48:17.946353  340990 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0229 01:48:17.946367  340990 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0229 01:48:17.946376  340990 command_runner.go:130] > # Defaults to false.
	I0229 01:48:17.946384  340990 command_runner.go:130] > # device_ownership_from_security_context = false
	I0229 01:48:17.946396  340990 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0229 01:48:17.946412  340990 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0229 01:48:17.946422  340990 command_runner.go:130] > # hooks_dir = [
	I0229 01:48:17.946433  340990 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0229 01:48:17.946437  340990 command_runner.go:130] > # ]
	I0229 01:48:17.946450  340990 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0229 01:48:17.946464  340990 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0229 01:48:17.946475  340990 command_runner.go:130] > # its default mounts from the following two files:
	I0229 01:48:17.946482  340990 command_runner.go:130] > #
	I0229 01:48:17.946492  340990 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0229 01:48:17.946502  340990 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0229 01:48:17.946511  340990 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0229 01:48:17.946520  340990 command_runner.go:130] > #
	I0229 01:48:17.946531  340990 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0229 01:48:17.946545  340990 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0229 01:48:17.946557  340990 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0229 01:48:17.946568  340990 command_runner.go:130] > #      only add mounts it finds in this file.
	I0229 01:48:17.946576  340990 command_runner.go:130] > #
	I0229 01:48:17.946585  340990 command_runner.go:130] > # default_mounts_file = ""
	I0229 01:48:17.946598  340990 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0229 01:48:17.946611  340990 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0229 01:48:17.946621  340990 command_runner.go:130] > pids_limit = 1024
	I0229 01:48:17.946632  340990 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0229 01:48:17.946645  340990 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0229 01:48:17.946658  340990 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0229 01:48:17.946673  340990 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0229 01:48:17.946683  340990 command_runner.go:130] > # log_size_max = -1
	I0229 01:48:17.946693  340990 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0229 01:48:17.946702  340990 command_runner.go:130] > # log_to_journald = false
	I0229 01:48:17.946715  340990 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0229 01:48:17.946726  340990 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0229 01:48:17.946735  340990 command_runner.go:130] > # Path to directory for container attach sockets.
	I0229 01:48:17.946745  340990 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0229 01:48:17.946756  340990 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0229 01:48:17.946767  340990 command_runner.go:130] > # bind_mount_prefix = ""
	I0229 01:48:17.946778  340990 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0229 01:48:17.946788  340990 command_runner.go:130] > # read_only = false
	I0229 01:48:17.946803  340990 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0229 01:48:17.946816  340990 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0229 01:48:17.946827  340990 command_runner.go:130] > # live configuration reload.
	I0229 01:48:17.946838  340990 command_runner.go:130] > # log_level = "info"
	I0229 01:48:17.946850  340990 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0229 01:48:17.946861  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:48:17.946869  340990 command_runner.go:130] > # log_filter = ""
	I0229 01:48:17.946881  340990 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0229 01:48:17.946899  340990 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0229 01:48:17.946908  340990 command_runner.go:130] > # separated by comma.
	I0229 01:48:17.946928  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:48:17.946938  340990 command_runner.go:130] > # uid_mappings = ""
	I0229 01:48:17.946948  340990 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0229 01:48:17.946960  340990 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0229 01:48:17.946970  340990 command_runner.go:130] > # separated by comma.
	I0229 01:48:17.946984  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:48:17.946994  340990 command_runner.go:130] > # gid_mappings = ""
	I0229 01:48:17.947006  340990 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0229 01:48:17.947014  340990 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 01:48:17.947026  340990 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 01:48:17.947046  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:48:17.947056  340990 command_runner.go:130] > # minimum_mappable_uid = -1
	I0229 01:48:17.947068  340990 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0229 01:48:17.947081  340990 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 01:48:17.947093  340990 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 01:48:17.947105  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:48:17.947112  340990 command_runner.go:130] > # minimum_mappable_gid = -1
	I0229 01:48:17.947121  340990 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0229 01:48:17.947134  340990 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0229 01:48:17.947147  340990 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0229 01:48:17.947157  340990 command_runner.go:130] > # ctr_stop_timeout = 30
	I0229 01:48:17.947169  340990 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0229 01:48:17.947181  340990 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0229 01:48:17.947192  340990 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0229 01:48:17.947200  340990 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0229 01:48:17.947208  340990 command_runner.go:130] > drop_infra_ctr = false
	I0229 01:48:17.947225  340990 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0229 01:48:17.947237  340990 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0229 01:48:17.947252  340990 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0229 01:48:17.947262  340990 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0229 01:48:17.947276  340990 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0229 01:48:17.947288  340990 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0229 01:48:17.947300  340990 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0229 01:48:17.947308  340990 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0229 01:48:17.947319  340990 command_runner.go:130] > # shared_cpuset = ""
	I0229 01:48:17.947332  340990 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0229 01:48:17.947343  340990 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0229 01:48:17.947353  340990 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0229 01:48:17.947366  340990 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0229 01:48:17.947376  340990 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0229 01:48:17.947388  340990 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0229 01:48:17.947397  340990 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0229 01:48:17.947406  340990 command_runner.go:130] > # enable_criu_support = false
	I0229 01:48:17.947417  340990 command_runner.go:130] > # Enable/disable the generation of the container,
	I0229 01:48:17.947430  340990 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0229 01:48:17.947442  340990 command_runner.go:130] > # enable_pod_events = false
	I0229 01:48:17.947455  340990 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0229 01:48:17.947479  340990 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0229 01:48:17.947488  340990 command_runner.go:130] > # default_runtime = "runc"
	I0229 01:48:17.947499  340990 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0229 01:48:17.947515  340990 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0229 01:48:17.947533  340990 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0229 01:48:17.947543  340990 command_runner.go:130] > # creation as a file is not desired either.
	I0229 01:48:17.947559  340990 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0229 01:48:17.947569  340990 command_runner.go:130] > # the hostname is being managed dynamically.
	I0229 01:48:17.947577  340990 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0229 01:48:17.947581  340990 command_runner.go:130] > # ]
	I0229 01:48:17.947590  340990 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0229 01:48:17.947603  340990 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0229 01:48:17.947616  340990 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0229 01:48:17.947627  340990 command_runner.go:130] > # Each entry in the table should follow the format:
	I0229 01:48:17.947643  340990 command_runner.go:130] > #
	I0229 01:48:17.947653  340990 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0229 01:48:17.947662  340990 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0229 01:48:17.947669  340990 command_runner.go:130] > # runtime_type = "oci"
	I0229 01:48:17.947729  340990 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0229 01:48:17.947741  340990 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0229 01:48:17.947751  340990 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0229 01:48:17.947761  340990 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0229 01:48:17.947770  340990 command_runner.go:130] > # monitor_env = []
	I0229 01:48:17.947781  340990 command_runner.go:130] > # privileged_without_host_devices = false
	I0229 01:48:17.947790  340990 command_runner.go:130] > # allowed_annotations = []
	I0229 01:48:17.947802  340990 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0229 01:48:17.947812  340990 command_runner.go:130] > # Where:
	I0229 01:48:17.947818  340990 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0229 01:48:17.947832  340990 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0229 01:48:17.947846  340990 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0229 01:48:17.947858  340990 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0229 01:48:17.947868  340990 command_runner.go:130] > #   in $PATH.
	I0229 01:48:17.947881  340990 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0229 01:48:17.947891  340990 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0229 01:48:17.947903  340990 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0229 01:48:17.947913  340990 command_runner.go:130] > #   state.
	I0229 01:48:17.947929  340990 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0229 01:48:17.947942  340990 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0229 01:48:17.947956  340990 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0229 01:48:17.947968  340990 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0229 01:48:17.947981  340990 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0229 01:48:17.947994  340990 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0229 01:48:17.948003  340990 command_runner.go:130] > #   The currently recognized values are:
	I0229 01:48:17.948013  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0229 01:48:17.948028  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0229 01:48:17.948041  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0229 01:48:17.948054  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0229 01:48:17.948069  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0229 01:48:17.948082  340990 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0229 01:48:17.948095  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0229 01:48:17.948113  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0229 01:48:17.948127  340990 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0229 01:48:17.948141  340990 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0229 01:48:17.948152  340990 command_runner.go:130] > #   deprecated option "conmon".
	I0229 01:48:17.948166  340990 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0229 01:48:17.948177  340990 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0229 01:48:17.948190  340990 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0229 01:48:17.948201  340990 command_runner.go:130] > #   should be moved to the container's cgroup
	I0229 01:48:17.948208  340990 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0229 01:48:17.948213  340990 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0229 01:48:17.948223  340990 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0229 01:48:17.948231  340990 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0229 01:48:17.948236  340990 command_runner.go:130] > #
	I0229 01:48:17.948243  340990 command_runner.go:130] > # Using the seccomp notifier feature:
	I0229 01:48:17.948247  340990 command_runner.go:130] > #
	I0229 01:48:17.948256  340990 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0229 01:48:17.948265  340990 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0229 01:48:17.948270  340990 command_runner.go:130] > #
	I0229 01:48:17.948280  340990 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0229 01:48:17.948292  340990 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0229 01:48:17.948295  340990 command_runner.go:130] > #
	I0229 01:48:17.948302  340990 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0229 01:48:17.948311  340990 command_runner.go:130] > # feature.
	I0229 01:48:17.948320  340990 command_runner.go:130] > #
	I0229 01:48:17.948329  340990 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0229 01:48:17.948343  340990 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0229 01:48:17.948356  340990 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0229 01:48:17.948368  340990 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0229 01:48:17.948381  340990 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0229 01:48:17.948388  340990 command_runner.go:130] > #
	I0229 01:48:17.948394  340990 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0229 01:48:17.948405  340990 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0229 01:48:17.948413  340990 command_runner.go:130] > #
	I0229 01:48:17.948423  340990 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0229 01:48:17.948435  340990 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0229 01:48:17.948442  340990 command_runner.go:130] > #
	I0229 01:48:17.948457  340990 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0229 01:48:17.948469  340990 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0229 01:48:17.948477  340990 command_runner.go:130] > # limitation.
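	Putting the notifier notes above together, a minimal sketch (assuming runc >= 1.1.0 is already the handler, and using an invented pod name) would be:

	    # Allow the annotation on the runc handler via a drop-in.
	    sudo tee /etc/crio/crio.conf.d/20-seccomp-notifier.conf <<'EOF'
	    [crio.runtime.runtimes.runc]
	    runtime_path = "/usr/bin/runc"
	    allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	    EOF
	    sudo systemctl restart crio
	    # restartPolicy must be Never per the note above, and the annotation
	    # has to be present at sandbox creation time.
	    kubectl run seccomp-demo --restart=Never --image=busybox \
	      --annotations='io.kubernetes.cri-o.seccompNotifierAction=stop' \
	      -- sleep 3600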
	I0229 01:48:17.948485  340990 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0229 01:48:17.948493  340990 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0229 01:48:17.948503  340990 command_runner.go:130] > runtime_type = "oci"
	I0229 01:48:17.948513  340990 command_runner.go:130] > runtime_root = "/run/runc"
	I0229 01:48:17.948523  340990 command_runner.go:130] > runtime_config_path = ""
	I0229 01:48:17.948533  340990 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0229 01:48:17.948543  340990 command_runner.go:130] > monitor_cgroup = "pod"
	I0229 01:48:17.948552  340990 command_runner.go:130] > monitor_exec_cgroup = ""
	I0229 01:48:17.948561  340990 command_runner.go:130] > monitor_env = [
	I0229 01:48:17.948569  340990 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 01:48:17.948574  340990 command_runner.go:130] > ]
	I0229 01:48:17.948583  340990 command_runner.go:130] > privileged_without_host_devices = false
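	The runc entry above is the one concrete instance of the runtime-handler template described earlier. A sketch of registering a second handler (assuming a crun binary at /usr/bin/crun, which is not part of this run) plus the matching Kubernetes RuntimeClass:

	    sudo tee /etc/crio/crio.conf.d/30-crun.conf <<'EOF'
	    [crio.runtime.runtimes.crun]
	    runtime_path = "/usr/bin/crun"
	    runtime_type = "oci"
	    runtime_root = "/run/crun"
	    EOF
	    sudo systemctl restart crio
	    # The RuntimeClass handler name must match the table key above.
	    kubectl apply -f - <<'EOF'
	    apiVersion: node.k8s.io/v1
	    kind: RuntimeClass
	    metadata:
	      name: crun
	    handler: crun
	    EOF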
	I0229 01:48:17.948597  340990 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0229 01:48:17.948609  340990 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0229 01:48:17.948622  340990 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0229 01:48:17.948637  340990 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0229 01:48:17.948652  340990 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0229 01:48:17.948662  340990 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0229 01:48:17.948676  340990 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0229 01:48:17.948696  340990 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0229 01:48:17.948709  340990 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0229 01:48:17.948723  340990 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0229 01:48:17.948731  340990 command_runner.go:130] > # Example:
	I0229 01:48:17.948741  340990 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0229 01:48:17.948751  340990 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0229 01:48:17.948760  340990 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0229 01:48:17.948771  340990 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0229 01:48:17.948780  340990 command_runner.go:130] > # cpuset = 0
	I0229 01:48:17.948787  340990 command_runner.go:130] > # cpushares = "0-1"
	I0229 01:48:17.948796  340990 command_runner.go:130] > # Where:
	I0229 01:48:17.948804  340990 command_runner.go:130] > # The workload name is workload-type.
	I0229 01:48:17.948818  340990 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0229 01:48:17.948830  340990 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0229 01:48:17.948848  340990 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0229 01:48:17.948858  340990 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0229 01:48:17.948871  340990 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
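	Following the workload-type example above, a pod opting in (a sketch only: the table is marked EXPERIMENTAL, and the pod/container name and share value are invented) would carry the key-only activation annotation plus an optional per-container override:

	    kubectl run wl-demo --restart=Never --image=busybox \
	      --annotations='io.crio/workload=' \
	      --annotations='io.crio.workload-type/wl-demo={"cpushares": "512"}' \
	      -- sleep 3600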
	I0229 01:48:17.948882  340990 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0229 01:48:17.948896  340990 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0229 01:48:17.948906  340990 command_runner.go:130] > # Default value is set to true
	I0229 01:48:17.948921  340990 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0229 01:48:17.948933  340990 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0229 01:48:17.948944  340990 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0229 01:48:17.948952  340990 command_runner.go:130] > # Default value is set to 'false'
	I0229 01:48:17.948960  340990 command_runner.go:130] > # disable_hostport_mapping = false
	I0229 01:48:17.948981  340990 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0229 01:48:17.948990  340990 command_runner.go:130] > #
	I0229 01:48:17.948999  340990 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0229 01:48:17.949012  340990 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0229 01:48:17.949025  340990 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0229 01:48:17.949038  340990 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0229 01:48:17.949050  340990 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0229 01:48:17.949058  340990 command_runner.go:130] > [crio.image]
	I0229 01:48:17.949065  340990 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0229 01:48:17.949074  340990 command_runner.go:130] > # default_transport = "docker://"
	I0229 01:48:17.949087  340990 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0229 01:48:17.949103  340990 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0229 01:48:17.949113  340990 command_runner.go:130] > # global_auth_file = ""
	I0229 01:48:17.949123  340990 command_runner.go:130] > # The image used to instantiate infra containers.
	I0229 01:48:17.949133  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:48:17.949144  340990 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0229 01:48:17.949152  340990 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0229 01:48:17.949165  340990 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0229 01:48:17.949176  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:48:17.949189  340990 command_runner.go:130] > # pause_image_auth_file = ""
	I0229 01:48:17.949201  340990 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0229 01:48:17.949213  340990 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0229 01:48:17.949226  340990 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0229 01:48:17.949237  340990 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0229 01:48:17.949245  340990 command_runner.go:130] > # pause_command = "/pause"
	I0229 01:48:17.949263  340990 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0229 01:48:17.949277  340990 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0229 01:48:17.949291  340990 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0229 01:48:17.949303  340990 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0229 01:48:17.949317  340990 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0229 01:48:17.949329  340990 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0229 01:48:17.949338  340990 command_runner.go:130] > # pinned_images = [
	I0229 01:48:17.949344  340990 command_runner.go:130] > # ]
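	As a sketch of the pinning behavior described above, using the pause image already referenced in this config (the drop-in path is illustrative):

	    sudo tee /etc/crio/crio.conf.d/40-pinned.conf <<'EOF'
	    [crio.image]
	    pinned_images = ["registry.k8s.io/pause:3.9"]
	    EOF
	    sudo systemctl restart crio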
	I0229 01:48:17.949352  340990 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0229 01:48:17.949366  340990 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0229 01:48:17.949378  340990 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0229 01:48:17.949392  340990 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0229 01:48:17.949404  340990 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0229 01:48:17.949413  340990 command_runner.go:130] > # signature_policy = ""
	I0229 01:48:17.949424  340990 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0229 01:48:17.949438  340990 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0229 01:48:17.949447  340990 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0229 01:48:17.949458  340990 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0229 01:48:17.949472  340990 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0229 01:48:17.949480  340990 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0229 01:48:17.949493  340990 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0229 01:48:17.949509  340990 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0229 01:48:17.949517  340990 command_runner.go:130] > # changing them here.
	I0229 01:48:17.949527  340990 command_runner.go:130] > # insecure_registries = [
	I0229 01:48:17.949535  340990 command_runner.go:130] > # ]
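	Per the recommendation above to prefer registries.conf over this file, a hedged example of marking a private registry insecure there instead (registry host invented):

	    sudo tee -a /etc/containers/registries.conf <<'EOF'
	    [[registry]]
	    location = "registry.example.internal:5000"
	    insecure = true
	    EOF
	    sudo systemctl restart crio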
	I0229 01:48:17.949544  340990 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0229 01:48:17.949555  340990 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0229 01:48:17.949565  340990 command_runner.go:130] > # image_volumes = "mkdir"
	I0229 01:48:17.949577  340990 command_runner.go:130] > # Temporary directory to use for storing big files
	I0229 01:48:17.949587  340990 command_runner.go:130] > # big_files_temporary_dir = ""
	I0229 01:48:17.949599  340990 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0229 01:48:17.949608  340990 command_runner.go:130] > # CNI plugins.
	I0229 01:48:17.949617  340990 command_runner.go:130] > [crio.network]
	I0229 01:48:17.949626  340990 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0229 01:48:17.949634  340990 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0229 01:48:17.949642  340990 command_runner.go:130] > # cni_default_network = ""
	I0229 01:48:17.949660  340990 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0229 01:48:17.949671  340990 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0229 01:48:17.949683  340990 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0229 01:48:17.949692  340990 command_runner.go:130] > # plugin_dirs = [
	I0229 01:48:17.949701  340990 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0229 01:48:17.949710  340990 command_runner.go:130] > # ]
	I0229 01:48:17.949718  340990 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0229 01:48:17.949724  340990 command_runner.go:130] > [crio.metrics]
	I0229 01:48:17.949736  340990 command_runner.go:130] > # Globally enable or disable metrics support.
	I0229 01:48:17.949745  340990 command_runner.go:130] > enable_metrics = true
	I0229 01:48:17.949757  340990 command_runner.go:130] > # Specify enabled metrics collectors.
	I0229 01:48:17.949767  340990 command_runner.go:130] > # Per default all metrics are enabled.
	I0229 01:48:17.949779  340990 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0229 01:48:17.949791  340990 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0229 01:48:17.949803  340990 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0229 01:48:17.949811  340990 command_runner.go:130] > # metrics_collectors = [
	I0229 01:48:17.949815  340990 command_runner.go:130] > # 	"operations",
	I0229 01:48:17.949822  340990 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0229 01:48:17.949833  340990 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0229 01:48:17.949843  340990 command_runner.go:130] > # 	"operations_errors",
	I0229 01:48:17.949852  340990 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0229 01:48:17.949861  340990 command_runner.go:130] > # 	"image_pulls_by_name",
	I0229 01:48:17.949871  340990 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0229 01:48:17.949880  340990 command_runner.go:130] > # 	"image_pulls_failures",
	I0229 01:48:17.949889  340990 command_runner.go:130] > # 	"image_pulls_successes",
	I0229 01:48:17.949897  340990 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0229 01:48:17.949903  340990 command_runner.go:130] > # 	"image_layer_reuse",
	I0229 01:48:17.949911  340990 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0229 01:48:17.949925  340990 command_runner.go:130] > # 	"containers_oom_total",
	I0229 01:48:17.949938  340990 command_runner.go:130] > # 	"containers_oom",
	I0229 01:48:17.949947  340990 command_runner.go:130] > # 	"processes_defunct",
	I0229 01:48:17.949956  340990 command_runner.go:130] > # 	"operations_total",
	I0229 01:48:17.949966  340990 command_runner.go:130] > # 	"operations_latency_seconds",
	I0229 01:48:17.949976  340990 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0229 01:48:17.949986  340990 command_runner.go:130] > # 	"operations_errors_total",
	I0229 01:48:17.949993  340990 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0229 01:48:17.950008  340990 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0229 01:48:17.950019  340990 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0229 01:48:17.950030  340990 command_runner.go:130] > # 	"image_pulls_success_total",
	I0229 01:48:17.950044  340990 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0229 01:48:17.950054  340990 command_runner.go:130] > # 	"containers_oom_count_total",
	I0229 01:48:17.950064  340990 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0229 01:48:17.950074  340990 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0229 01:48:17.950081  340990 command_runner.go:130] > # ]
	I0229 01:48:17.950087  340990 command_runner.go:130] > # The port on which the metrics server will listen.
	I0229 01:48:17.950094  340990 command_runner.go:130] > # metrics_port = 9090
	I0229 01:48:17.950102  340990 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0229 01:48:17.950111  340990 command_runner.go:130] > # metrics_socket = ""
	I0229 01:48:17.950124  340990 command_runner.go:130] > # The certificate for the secure metrics server.
	I0229 01:48:17.950137  340990 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0229 01:48:17.950149  340990 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0229 01:48:17.950160  340990 command_runner.go:130] > # certificate on any modification event.
	I0229 01:48:17.950169  340990 command_runner.go:130] > # metrics_cert = ""
	I0229 01:48:17.950177  340990 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0229 01:48:17.950187  340990 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0229 01:48:17.950197  340990 command_runner.go:130] > # metrics_key = ""
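	With enable_metrics = true set above and the commented default port of 9090, the endpoint can be spot-checked from the node; a sketch (collector names appear with the prefixes described above):

	    curl -s http://127.0.0.1:9090/metrics | grep 'crio_' | head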
	I0229 01:48:17.950209  340990 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0229 01:48:17.950218  340990 command_runner.go:130] > [crio.tracing]
	I0229 01:48:17.950235  340990 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0229 01:48:17.950245  340990 command_runner.go:130] > # enable_tracing = false
	I0229 01:48:17.950254  340990 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0229 01:48:17.950265  340990 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0229 01:48:17.950277  340990 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0229 01:48:17.950289  340990 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0229 01:48:17.950299  340990 command_runner.go:130] > # CRI-O NRI configuration.
	I0229 01:48:17.950307  340990 command_runner.go:130] > [crio.nri]
	I0229 01:48:17.950316  340990 command_runner.go:130] > # Globally enable or disable NRI.
	I0229 01:48:17.950320  340990 command_runner.go:130] > # enable_nri = false
	I0229 01:48:17.950327  340990 command_runner.go:130] > # NRI socket to listen on.
	I0229 01:48:17.950338  340990 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0229 01:48:17.950348  340990 command_runner.go:130] > # NRI plugin directory to use.
	I0229 01:48:17.950356  340990 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0229 01:48:17.950374  340990 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0229 01:48:17.950385  340990 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0229 01:48:17.950399  340990 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0229 01:48:17.950409  340990 command_runner.go:130] > # nri_disable_connections = false
	I0229 01:48:17.950417  340990 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0229 01:48:17.950426  340990 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0229 01:48:17.950436  340990 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0229 01:48:17.950447  340990 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0229 01:48:17.950457  340990 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0229 01:48:17.950467  340990 command_runner.go:130] > [crio.stats]
	I0229 01:48:17.950479  340990 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0229 01:48:17.950490  340990 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0229 01:48:17.950500  340990 command_runner.go:130] > # stats_collection_period = 0
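	A dump equivalent to the TOML above can be reproduced on demand, since crio prints its configuration with the `crio config` subcommand; a sketch using the profile name from this run:

	    out/minikube-linux-amd64 -p multinode-107035 ssh -- sudo crio config | head -n 40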
	I0229 01:48:17.950641  340990 cni.go:84] Creating CNI manager for ""
	I0229 01:48:17.950653  340990 cni.go:136] 3 nodes found, recommending kindnet
	I0229 01:48:17.950698  340990 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:48:17.950727  340990 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-107035 NodeName:multinode-107035 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 01:48:17.950903  340990 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-107035"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
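	Before this rendered config is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below), it can be sanity-checked with the kubeadm binary that minikube staged; a sketch, assuming the `kubeadm config validate` subcommand available in v1.28:

	    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new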
	
	I0229 01:48:17.951000  340990 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-107035 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-107035 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 01:48:17.951068  340990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 01:48:17.961400  340990 command_runner.go:130] > kubeadm
	I0229 01:48:17.961415  340990 command_runner.go:130] > kubectl
	I0229 01:48:17.961419  340990 command_runner.go:130] > kubelet
	I0229 01:48:17.961612  340990 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:48:17.961682  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:48:17.971859  340990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0229 01:48:17.990075  340990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 01:48:18.008557  340990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0229 01:48:18.029308  340990 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0229 01:48:18.033956  340990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:48:18.047603  340990 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035 for IP: 192.168.39.183
	I0229 01:48:18.047635  340990 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:48:18.047812  340990 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 01:48:18.047872  340990 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 01:48:18.047965  340990 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.key
	I0229 01:48:18.048072  340990 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/apiserver.key.a2b84326
	I0229 01:48:18.048116  340990 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/proxy-client.key
	I0229 01:48:18.048125  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 01:48:18.048138  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 01:48:18.048158  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 01:48:18.048178  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 01:48:18.048192  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 01:48:18.048206  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 01:48:18.048217  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 01:48:18.048229  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 01:48:18.048303  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 01:48:18.048351  340990 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 01:48:18.048367  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 01:48:18.048397  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 01:48:18.048425  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:48:18.048464  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 01:48:18.048512  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 01:48:18.048559  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem -> /usr/share/ca-certificates/323885.pem
	I0229 01:48:18.048580  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> /usr/share/ca-certificates/3238852.pem
	I0229 01:48:18.048599  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:48:18.049234  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:48:18.079456  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 01:48:18.109756  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:48:18.139109  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 01:48:18.166807  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:48:18.194793  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:48:18.222472  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:48:18.249786  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 01:48:18.276837  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 01:48:18.304559  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 01:48:18.333899  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:48:18.361610  340990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:48:18.381034  340990 ssh_runner.go:195] Run: openssl version
	I0229 01:48:18.387225  340990 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 01:48:18.387318  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:48:18.399153  340990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:48:18.403976  340990 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:48:18.404184  340990 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:48:18.404248  340990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:48:18.410608  340990 command_runner.go:130] > b5213941
	I0229 01:48:18.410700  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 01:48:18.422457  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 01:48:18.434374  340990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 01:48:18.439451  340990 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 01:48:18.439552  340990 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 01:48:18.439609  340990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 01:48:18.446015  340990 command_runner.go:130] > 51391683
	I0229 01:48:18.446090  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 01:48:18.458025  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 01:48:18.469801  340990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 01:48:18.474990  340990 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 01:48:18.475020  340990 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 01:48:18.475072  340990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 01:48:18.481206  340990 command_runner.go:130] > 3ec20f2e
	I0229 01:48:18.481275  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
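	The three symlinks above follow OpenSSL's CApath convention: the link name is the subject hash printed by `openssl x509 -hash` (b5213941, 51391683, 3ec20f2e in this run) plus a ".0" suffix, which is what lets verification find each CA by hash. For example, re-deriving the first one:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0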
	I0229 01:48:18.493040  340990 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:48:18.497928  340990 command_runner.go:130] > ca.crt
	I0229 01:48:18.497945  340990 command_runner.go:130] > ca.key
	I0229 01:48:18.497952  340990 command_runner.go:130] > healthcheck-client.crt
	I0229 01:48:18.497958  340990 command_runner.go:130] > healthcheck-client.key
	I0229 01:48:18.497964  340990 command_runner.go:130] > peer.crt
	I0229 01:48:18.497969  340990 command_runner.go:130] > peer.key
	I0229 01:48:18.497975  340990 command_runner.go:130] > server.crt
	I0229 01:48:18.497981  340990 command_runner.go:130] > server.key
	I0229 01:48:18.498100  340990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 01:48:18.504469  340990 command_runner.go:130] > Certificate will not expire
	I0229 01:48:18.504779  340990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 01:48:18.511016  340990 command_runner.go:130] > Certificate will not expire
	I0229 01:48:18.511098  340990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 01:48:18.517318  340990 command_runner.go:130] > Certificate will not expire
	I0229 01:48:18.517481  340990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 01:48:18.523651  340990 command_runner.go:130] > Certificate will not expire
	I0229 01:48:18.523951  340990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 01:48:18.530075  340990 command_runner.go:130] > Certificate will not expire
	I0229 01:48:18.530188  340990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 01:48:18.536287  340990 command_runner.go:130] > Certificate will not expire
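	Each `-checkend 86400` probe above reports success via its exit status, not just the printed message; a sketch of the same check as it might be used in a script (against the apiserver cert copied earlier):

	    if sudo openssl x509 -noout -checkend 86400 \
	         -in /var/lib/minikube/certs/apiserver.crt; then
	      echo 'valid for at least 24h'
	    else
	      echo 'expires within 24h'
	    fi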
	I0229 01:48:18.536525  340990 kubeadm.go:404] StartCluster: {Name:multinode-107035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-107035 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:48:18.536698  340990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 01:48:18.536762  340990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 01:48:18.578203  340990 cri.go:89] found id: ""
	I0229 01:48:18.578302  340990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:48:18.591423  340990 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0229 01:48:18.591454  340990 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0229 01:48:18.591464  340990 command_runner.go:130] > /var/lib/minikube/etcd:
	I0229 01:48:18.591469  340990 command_runner.go:130] > member
	I0229 01:48:18.591517  340990 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 01:48:18.591562  340990 kubeadm.go:636] restartCluster start
	I0229 01:48:18.591633  340990 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 01:48:18.603354  340990 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:18.603929  340990 kubeconfig.go:92] found "multinode-107035" server: "https://192.168.39.183:8443"
	I0229 01:48:18.604356  340990 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:48:18.604615  340990 kapi.go:59] client config for multinode-107035: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.key", CAFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:48:18.605187  340990 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 01:48:18.605533  340990 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 01:48:18.616843  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:18.616896  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:18.631063  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:19.117729  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:19.117820  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:19.132954  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:19.617671  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:19.617752  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:19.632937  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:20.117550  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:20.117644  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:20.131949  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:20.617533  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:20.617661  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:20.631400  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:21.116940  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:21.117030  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:21.131168  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:21.617774  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:21.617920  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:21.632019  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:22.117165  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:22.117254  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:22.130802  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:22.616884  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:22.616976  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:22.630407  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:23.117012  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:23.117093  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:23.130485  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:23.616968  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:23.617073  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:23.632321  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:24.117915  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:24.118009  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:24.131056  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:24.617900  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:24.618009  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:24.631800  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:25.117316  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:25.117396  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:25.131024  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:25.617586  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:25.617711  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:25.631655  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:26.117183  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:26.117300  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:26.130947  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:26.617514  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:26.617593  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:26.630964  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:27.117603  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:27.117710  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:27.131039  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:27.617163  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:27.617291  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:27.630919  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:28.117535  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:28.117649  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:28.132330  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:28.617076  340990 api_server.go:166] Checking apiserver status ...
	I0229 01:48:28.617171  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:48:28.630356  340990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:48:28.630387  340990 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
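The block above is a fixed-interval poll: pgrep runs roughly every 500ms until the surrounding context deadline expires, at which point the code concludes the apiserver is gone and the cluster needs reconfiguring. A sketch of that pattern (the 10-second timeout is an assumption for illustration, matching the window visible in the timestamps):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        for {
            // Same probe as the Run lines above.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("apiserver process found")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("needs reconfigure: apiserver error:", ctx.Err())
                return
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
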
	I0229 01:48:28.630429  340990 kubeadm.go:1135] stopping kube-system containers ...
	I0229 01:48:28.630448  340990 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 01:48:28.630514  340990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 01:48:28.669825  340990 cri.go:89] found id: ""
	I0229 01:48:28.669904  340990 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 01:48:28.691103  340990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:48:28.703221  340990 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0229 01:48:28.703244  340990 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0229 01:48:28.703251  340990 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0229 01:48:28.703259  340990 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:48:28.703472  340990 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:48:28.703527  340990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:48:28.713656  340990 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 01:48:28.713675  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:48:28.827134  340990 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:48:28.827496  340990 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0229 01:48:28.827891  340990 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0229 01:48:28.828316  340990 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:48:28.829919  340990 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0229 01:48:28.830343  340990 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:48:28.831061  340990 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0229 01:48:28.831504  340990 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0229 01:48:28.831951  340990 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:48:28.832336  340990 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:48:28.832785  340990 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:48:28.833409  340990 command_runner.go:130] > [certs] Using the existing "sa" key
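Rather than a full kubeadm init, the restart path replays individual kubeadm phases against the pinned binaries; the Run lines here and below show the exact commands. A sketch of driving the same phase sequence (certs, kubeconfig, kubelet-start, control-plane, etcd) from Go, with the command strings copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func runPhase(phase string) error {
        cmd := fmt.Sprintf(
            `sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
            phase)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        for _, p := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
            if err := runPhase(p); err != nil {
                panic(err)
            }
        }
    }
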
	I0229 01:48:28.834602  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:48:29.653560  340990 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:48:29.653588  340990 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:48:29.653597  340990 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:48:29.653602  340990 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:48:29.653612  340990 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:48:29.653904  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:48:29.863573  340990 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:48:29.863619  340990 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:48:29.863628  340990 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 01:48:29.863661  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:48:29.934045  340990 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:48:29.934081  340990 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:48:29.934091  340990 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:48:29.934127  340990 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:48:29.934163  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:48:30.001906  340990 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:48:30.005668  340990 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:48:30.005765  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:48:30.505924  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:48:31.006120  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:48:31.024129  340990 command_runner.go:130] > 1061
	I0229 01:48:31.024198  340990 api_server.go:72] duration metric: took 1.018534417s to wait for apiserver process to appear ...
	I0229 01:48:31.024228  340990 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:48:31.024252  340990 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0229 01:48:31.024853  340990 api_server.go:269] stopped: https://192.168.39.183:8443/healthz: Get "https://192.168.39.183:8443/healthz": dial tcp 192.168.39.183:8443: connect: connection refused
	I0229 01:48:31.524356  340990 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0229 01:48:34.436484  340990 api_server.go:279] https://192.168.39.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 01:48:34.436514  340990 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:48:34.436529  340990 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0229 01:48:34.530837  340990 api_server.go:279] https://192.168.39.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 01:48:34.530866  340990 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:48:34.530880  340990 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0229 01:48:34.542482  340990 api_server.go:279] https://192.168.39.183:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 01:48:34.542509  340990 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:48:35.025104  340990 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0229 01:48:35.031171  340990 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 01:48:35.031199  340990 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
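The verbose healthz body marks each check with [+] or [-]; during startup only the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks lag behind, as the dumps above show. A sketch of pulling the failing checks out of such a body:

    package main

    import (
        "fmt"
        "strings"
    )

    func failingChecks(body string) []string {
        var failed []string
        for _, line := range strings.Split(body, "\n") {
            line = strings.TrimSpace(line)
            if strings.HasPrefix(line, "[-]") {
                failed = append(failed, line)
            }
        }
        return failed
    }

    func main() {
        body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
        fmt.Println(failingChecks(body))
        // -> [[-]poststarthook/rbac/bootstrap-roles failed: reason withheld]
    }
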
	I0229 01:48:35.524789  340990 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0229 01:48:35.533524  340990 api_server.go:279] https://192.168.39.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 01:48:35.533551  340990 api_server.go:103] status: https://192.168.39.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:48:36.025185  340990 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0229 01:48:36.031250  340990 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0229 01:48:36.031354  340990 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I0229 01:48:36.031371  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:36.031380  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:36.031386  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:36.040321  340990 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 01:48:36.040349  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:36.040369  340990 round_trippers.go:580]     Audit-Id: def688fc-e3d3-4226-b81d-c74a311f1a4f
	I0229 01:48:36.040374  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:36.040381  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:36.040387  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:36.040391  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:36.040394  340990 round_trippers.go:580]     Content-Length: 264
	I0229 01:48:36.040398  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:36 GMT
	I0229 01:48:36.040426  340990 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 01:48:36.040526  340990 api_server.go:141] control plane version: v1.28.4
	I0229 01:48:36.040552  340990 api_server.go:131] duration metric: took 5.016316395s to wait for apiserver health ...
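The healthz 200 plus the decoded /version body is what yields the "control plane version: v1.28.4" line. A sketch of that decoding step, with the body abbreviated from the response above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        body := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4"}`)
        var v struct {
            GitVersion string `json:"gitVersion"`
        }
        if err := json.Unmarshal(body, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.28.4
    }
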
	I0229 01:48:36.040563  340990 cni.go:84] Creating CNI manager for ""
	I0229 01:48:36.040570  340990 cni.go:136] 3 nodes found, recommending kindnet
	I0229 01:48:36.042341  340990 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 01:48:36.043702  340990 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 01:48:36.049427  340990 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 01:48:36.049448  340990 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 01:48:36.049454  340990 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 01:48:36.049461  340990 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 01:48:36.049465  340990 command_runner.go:130] > Access: 2024-02-29 01:48:05.172185555 +0000
	I0229 01:48:36.049470  340990 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 01:48:36.049475  340990 command_runner.go:130] > Change: 2024-02-29 01:48:03.809050024 +0000
	I0229 01:48:36.049481  340990 command_runner.go:130] >  Birth: -
	I0229 01:48:36.049614  340990 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 01:48:36.049631  340990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 01:48:36.074290  340990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 01:48:37.116325  340990 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 01:48:37.116355  340990 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 01:48:37.116365  340990 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 01:48:37.116373  340990 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 01:48:37.116457  340990 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.042128057s)
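The kindnet manifest is copied to the node ("scp memory --> /var/tmp/minikube/cni.yaml" above) and then applied with the pinned kubectl, exactly the command in the Run line. A standalone sketch of the apply step:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
        fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
        if err != nil {
            panic(err)
        }
    }
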
	I0229 01:48:37.116527  340990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:48:37.116700  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I0229 01:48:37.116713  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.116724  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.116733  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.120620  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:37.120638  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.120647  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.120652  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.120657  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.120662  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.120665  340990 round_trippers.go:580]     Audit-Id: 3b6e1ad3-5071-4781-af7b-3473bd663ec2
	I0229 01:48:37.120668  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.122459  340990 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"780"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"741","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82839 chars]
	I0229 01:48:37.126555  340990 system_pods.go:59] 12 kube-system pods found
	I0229 01:48:37.126598  340990 system_pods.go:61] "coredns-5dd5756b68-5fqf2" [2730e330-16ca-4b2d-a5dc-330ff37ab57e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 01:48:37.126614  340990 system_pods.go:61] "etcd-multinode-107035" [65255c97-af0a-4233-b308-e46dfd75a9f9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 01:48:37.126627  340990 system_pods.go:61] "kindnet-g9fbr" [31f24411-2b54-422d-873f-5826bdb2139a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 01:48:37.126637  340990 system_pods.go:61] "kindnet-hfz2n" [3ba1ea9a-17be-421b-b430-21e867586927] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 01:48:37.126651  340990 system_pods.go:61] "kindnet-tqzhh" [ccf5ad9d-f1ce-41d5-9d35-43618107f5c8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 01:48:37.126673  340990 system_pods.go:61] "kube-apiserver-multinode-107035" [c8a5ad6e-c2cc-49a4-8837-ba1b280f87af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 01:48:37.126686  340990 system_pods.go:61] "kube-controller-manager-multinode-107035" [cc34d9e0-d4bd-4fac-8c94-6ead8a744abc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 01:48:37.126695  340990 system_pods.go:61] "kube-proxy-2vt7v" [eaa78334-8191-47e9-b001-343c90a87460] Running
	I0229 01:48:37.126702  340990 system_pods.go:61] "kube-proxy-7vhtd" [1a552ea7-1d99-46ec-99e1-30ad4ac72ca8] Running
	I0229 01:48:37.126708  340990 system_pods.go:61] "kube-proxy-fhzft" [3b05cd87-92a9-4c59-879a-d42c3a08c7d4] Running
	I0229 01:48:37.126718  340990 system_pods.go:61] "kube-scheduler-multinode-107035" [ac9bc04a-dac0-40f5-b928-4cacd028df82] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 01:48:37.126727  340990 system_pods.go:61] "storage-provisioner" [d83d7986-be05-4caf-bec9-ef577b473d77] Running
	I0229 01:48:37.126736  340990 system_pods.go:74] duration metric: took 10.196505ms to wait for pod list to return data ...
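The 12-pod summary above comes from a plain list of the kube-system namespace. A client-go sketch of the same listing, with the kubeconfig path taken from the loader line earlier in the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18063-316644/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %q %s\n", p.Name, p.Status.Phase)
        }
    }
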
	I0229 01:48:37.126748  340990 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:48:37.126818  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I0229 01:48:37.126828  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.126838  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.126845  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.129457  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:37.129478  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.129488  340990 round_trippers.go:580]     Audit-Id: 9ef082c1-a978-4843-bd36-0d3d2ac02ece
	I0229 01:48:37.129493  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.129499  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.129506  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.129511  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.129516  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.130034  340990 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"780"},"items":[{"metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"691","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16475 chars]
	I0229 01:48:37.130926  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:48:37.130961  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:48:37.130983  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:48:37.130989  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:48:37.130997  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:48:37.131001  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:48:37.131010  340990 node_conditions.go:105] duration metric: took 4.252773ms to run NodePressure ...
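The NodePressure pass reads two capacity values per node, cpu and ephemeral-storage, which is why each of the three nodes above logs the same pair. A client-go sketch of the same read:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18063-316644/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
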
	I0229 01:48:37.131042  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:48:37.303209  340990 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0229 01:48:37.368210  340990 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0229 01:48:37.370061  340990 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 01:48:37.370173  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0229 01:48:37.370186  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.370197  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.370207  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.373002  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:37.373019  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.373029  340990 round_trippers.go:580]     Audit-Id: e39121dd-3f56-4686-8ab3-644240a4c4d9
	I0229 01:48:37.373034  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.373039  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.373042  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.373045  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.373050  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.373637  340990 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"785"},"items":[{"metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"739","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28735 chars]
	I0229 01:48:37.374681  340990 kubeadm.go:787] kubelet initialised
	I0229 01:48:37.374708  340990 kubeadm.go:788] duration metric: took 4.623496ms waiting for restarted kubelet to initialise ...
	I0229 01:48:37.374719  340990 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
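The pod_ready waits that follow reduce each pod to its PodReady condition. A sketch of that predicate against a synthetic not-ready pod:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionFalse},
        }}}
        fmt.Println(isPodReady(pod)) // false
    }
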
	I0229 01:48:37.374786  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I0229 01:48:37.374795  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.374808  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.374813  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.381789  340990 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 01:48:37.381812  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.381822  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.381827  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.381832  340990 round_trippers.go:580]     Audit-Id: 2ecb6f7a-d2d1-4935-9d71-5d8f3969d487
	I0229 01:48:37.381835  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.381839  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.381842  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.382848  340990 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"785"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"741","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82839 chars]
	I0229 01:48:37.386101  340990 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:37.386200  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:48:37.386210  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.386217  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.386221  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.390352  340990 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 01:48:37.390368  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.390374  340990 round_trippers.go:580]     Audit-Id: 6294319b-df4c-49c5-b635-f835a49e6713
	I0229 01:48:37.390379  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.390381  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.390384  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.390387  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.390390  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.391537  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"741","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 01:48:37.391952  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:37.391968  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.391975  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.391987  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.395780  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:37.395797  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.395806  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.395813  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.395817  340990 round_trippers.go:580]     Audit-Id: 3ab2d766-718a-455e-8699-c94fdddbeeb1
	I0229 01:48:37.395822  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.395826  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.395831  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.396145  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"691","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 01:48:37.396456  340990 pod_ready.go:97] node "multinode-107035" hosting pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
	I0229 01:48:37.396480  340990 pod_ready.go:81] duration metric: took 10.353991ms waiting for pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace to be "Ready" ...
	E0229 01:48:37.396491  340990 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107035" hosting pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
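The skip at pod_ready.go:97 gates on the hosting node, not the pod: when the node's Ready condition is not True, the per-pod wait is abandoned early, as it is here and for the control-plane pods below. A sketch of that gate:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func nodeIsReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        node := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
            {Type: corev1.NodeReady, Status: corev1.ConditionFalse},
        }}}
        if !nodeIsReady(node) {
            fmt.Println(`node not "Ready", skipping pod wait`)
        }
    }
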
	I0229 01:48:37.396501  340990 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:37.396557  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:48:37.396575  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.396585  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.396593  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.399104  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:37.399126  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.399135  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.399141  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.399144  340990 round_trippers.go:580]     Audit-Id: 5ea51074-bf2a-41b3-99cd-55cb5d8b3ea1
	I0229 01:48:37.399162  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.399174  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.399177  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.399346  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"739","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6049 chars]
	I0229 01:48:37.399654  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:37.399664  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.399671  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.399676  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.403547  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:37.403566  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.403574  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.403581  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.403589  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.403596  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.403599  340990 round_trippers.go:580]     Audit-Id: 5a00c8df-3d38-4d23-9eab-996cab3e15ab
	I0229 01:48:37.403602  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.404128  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"691","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 01:48:37.404413  340990 pod_ready.go:97] node "multinode-107035" hosting pod "etcd-multinode-107035" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
	I0229 01:48:37.404437  340990 pod_ready.go:81] duration metric: took 7.929042ms waiting for pod "etcd-multinode-107035" in "kube-system" namespace to be "Ready" ...
	E0229 01:48:37.404445  340990 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107035" hosting pod "etcd-multinode-107035" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
	I0229 01:48:37.404457  340990 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:37.404511  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-107035
	I0229 01:48:37.404519  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.404525  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.404530  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.407152  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:37.407165  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.407170  340990 round_trippers.go:580]     Audit-Id: 4f8da137-b96e-4f00-86dd-66734114c884
	I0229 01:48:37.407188  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.407196  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.407201  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.407205  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.407210  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.407521  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-107035","namespace":"kube-system","uid":"c8a5ad6e-c2cc-49a4-8837-ba1b280f87af","resourceVersion":"733","creationTimestamp":"2024-02-29T01:38:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.183:8443","kubernetes.io/config.hash":"f8e3f19840dda0faee1ad3a91ae482c1","kubernetes.io/config.mirror":"f8e3f19840dda0faee1ad3a91ae482c1","kubernetes.io/config.seen":"2024-02-29T01:38:16.621158531Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7595 chars]
	I0229 01:48:37.407947  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:37.407963  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.407969  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.407975  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.410277  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:37.410292  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.410298  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.410302  340990 round_trippers.go:580]     Audit-Id: 18cc0b9c-a61c-4119-8c60-035959074e21
	I0229 01:48:37.410304  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.410308  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.410311  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.410314  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.410480  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"691","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 01:48:37.410767  340990 pod_ready.go:97] node "multinode-107035" hosting pod "kube-apiserver-multinode-107035" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
	I0229 01:48:37.410786  340990 pod_ready.go:81] duration metric: took 6.324471ms waiting for pod "kube-apiserver-multinode-107035" in "kube-system" namespace to be "Ready" ...
	E0229 01:48:37.410794  340990 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107035" hosting pod "kube-apiserver-multinode-107035" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
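The pod_ready.go:97 / pod_ready.go:66 pair above records a deliberate skip: a control-plane pod is not waited on while its hosting node reports Ready=False. A minimal client-go sketch of that node-condition check (package and function names here are illustrative, not minikube's actual helpers):

```go
package readiness

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node carries a Ready condition
// with status True, mirroring the `has status "Ready":"False"` check above.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("node %q reports no Ready condition", name)
}
```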
	I0229 01:48:37.410810  340990 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:37.410851  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-107035
	I0229 01:48:37.410859  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.410865  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.410869  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.412888  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:37.412901  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.412906  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.412910  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.412913  340990 round_trippers.go:580]     Audit-Id: 22e66f45-6c21-4e25-8e76-fa075d221021
	I0229 01:48:37.412916  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.412920  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.412923  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.413249  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-107035","namespace":"kube-system","uid":"cc34d9e0-d4bd-4fac-8c94-6ead8a744abc","resourceVersion":"740","creationTimestamp":"2024-02-29T01:38:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d885436ac2f1544135b29b38fb6816fc","kubernetes.io/config.mirror":"d885436ac2f1544135b29b38fb6816fc","kubernetes.io/config.seen":"2024-02-29T01:38:23.684826383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7169 chars]
	I0229 01:48:37.516798  340990 request.go:629] Waited for 103.052613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:37.516866  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:37.516872  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.516879  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.516884  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.519151  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:37.519179  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.519190  340990 round_trippers.go:580]     Audit-Id: a5cedc34-9484-43f0-9798-a45f1e087ed6
	I0229 01:48:37.519195  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.519199  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.519203  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.519209  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.519213  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.519372  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"691","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 01:48:37.519882  340990 pod_ready.go:97] node "multinode-107035" hosting pod "kube-controller-manager-multinode-107035" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
	I0229 01:48:37.519914  340990 pod_ready.go:81] duration metric: took 109.096127ms waiting for pod "kube-controller-manager-multinode-107035" in "kube-system" namespace to be "Ready" ...
	E0229 01:48:37.519928  340990 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107035" hosting pod "kube-controller-manager-multinode-107035" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
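The recurring `Waited for … due to client-side throttling, not priority and fairness` lines (request.go:629) come from client-go's token-bucket limiter, not from the API server: with the rest.Config QPS/Burst fields left at zero, client-go falls back to its defaults (QPS 5, Burst 10), so a burst of status GETs queues up the ~100-200ms waits seen above and below. A sketch of loosening that limit; the kubeconfig path and the values chosen are placeholders:

```go
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; substitute the kubeconfig under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	// Left at zero, these fall back to client-go's defaults (QPS 5,
	// Burst 10), which is what produces the throttling waits in the log.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
}
```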
	I0229 01:48:37.519943  340990 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2vt7v" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:37.717412  340990 request.go:629] Waited for 197.36244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vt7v
	I0229 01:48:37.717483  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vt7v
	I0229 01:48:37.717491  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.717501  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.717508  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.720312  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:37.720337  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.720347  340990 round_trippers.go:580]     Audit-Id: 3676c823-6374-4d75-af10-d01bc19db676
	I0229 01:48:37.720351  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.720356  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.720362  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.720367  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.720373  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.720551  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vt7v","generateName":"kube-proxy-","namespace":"kube-system","uid":"eaa78334-8191-47e9-b001-343c90a87460","resourceVersion":"466","creationTimestamp":"2024-02-29T01:39:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:39:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5488 chars]
	I0229 01:48:37.917467  340990 request.go:629] Waited for 196.388091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m02
	I0229 01:48:37.917562  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m02
	I0229 01:48:37.917571  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:37.917583  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:37.917591  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:37.920734  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:37.920761  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:37.920770  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:37.920777  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:37.920782  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:37.920786  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:37.920790  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:37 GMT
	I0229 01:48:37.920794  340990 round_trippers.go:580]     Audit-Id: ad7df809-402c-4e46-8b6d-23fe07631b75
	I0229 01:48:37.921213  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035-m02","uid":"ce7e14a9-031d-40ba-b40d-27d557da3a03","resourceVersion":"771","creationTimestamp":"2024-02-29T01:39:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T01_40_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:39:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0229 01:48:37.921543  340990 pod_ready.go:92] pod "kube-proxy-2vt7v" in "kube-system" namespace has status "Ready":"True"
	I0229 01:48:37.921564  340990 pod_ready.go:81] duration metric: took 401.611395ms waiting for pod "kube-proxy-2vt7v" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:37.921582  340990 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7vhtd" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:38.117627  340990 request.go:629] Waited for 195.970691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vhtd
	I0229 01:48:38.117713  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vhtd
	I0229 01:48:38.117719  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:38.117727  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:38.117731  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:38.120802  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:38.120836  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:38.120847  340990 round_trippers.go:580]     Audit-Id: 9fb1c745-00a0-4462-8ed7-b6e85a3a4d98
	I0229 01:48:38.120851  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:38.120857  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:38.120861  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:38.120865  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:38.120872  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:38 GMT
	I0229 01:48:38.121333  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7vhtd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1a552ea7-1d99-46ec-99e1-30ad4ac72ca8","resourceVersion":"775","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5484 chars]
	I0229 01:48:38.317254  340990 request.go:629] Waited for 195.381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:38.317331  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:38.317337  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:38.317344  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:38.317350  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:38.319565  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:38.319586  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:38.319596  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:38.319604  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:38.319610  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:38.319615  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:38.319620  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:38 GMT
	I0229 01:48:38.319624  340990 round_trippers.go:580]     Audit-Id: 75e96c20-6160-44a5-93ed-840c9b504bed
	I0229 01:48:38.319838  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"691","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 01:48:38.320264  340990 pod_ready.go:97] node "multinode-107035" hosting pod "kube-proxy-7vhtd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
	I0229 01:48:38.320293  340990 pod_ready.go:81] duration metric: took 398.700989ms waiting for pod "kube-proxy-7vhtd" in "kube-system" namespace to be "Ready" ...
	E0229 01:48:38.320304  340990 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107035" hosting pod "kube-proxy-7vhtd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
	I0229 01:48:38.320328  340990 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fhzft" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:38.517205  340990 request.go:629] Waited for 196.802392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhzft
	I0229 01:48:38.517320  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhzft
	I0229 01:48:38.517331  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:38.517339  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:38.517346  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:38.520708  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:38.520727  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:38.520735  340990 round_trippers.go:580]     Audit-Id: 973055d3-7fb8-48c7-b69c-ccfcce3189c0
	I0229 01:48:38.520740  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:38.520744  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:38.520747  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:38.520751  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:38.520753  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:38 GMT
	I0229 01:48:38.520894  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fhzft","generateName":"kube-proxy-","namespace":"kube-system","uid":"3b05cd87-92a9-4c59-879a-d42c3a08c7d4","resourceVersion":"669","creationTimestamp":"2024-02-29T01:40:04Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:40:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5492 chars]
	I0229 01:48:38.717215  340990 request.go:629] Waited for 195.785669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m03
	I0229 01:48:38.717287  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m03
	I0229 01:48:38.717294  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:38.717304  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:38.717308  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:38.719681  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:38.719704  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:38.719712  340990 round_trippers.go:580]     Audit-Id: 1a50cb90-876b-4984-8738-0171754c6898
	I0229 01:48:38.719717  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:38.719728  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:38.719732  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:38.719736  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:38.719740  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:38 GMT
	I0229 01:48:38.719921  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035-m03","uid":"7068367c-f5dd-4a1d-bba4-904a860289cd","resourceVersion":"750","creationTimestamp":"2024-02-29T01:40:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T01_40_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 4085 chars]
	I0229 01:48:38.720299  340990 pod_ready.go:92] pod "kube-proxy-fhzft" in "kube-system" namespace has status "Ready":"True"
	I0229 01:48:38.720319  340990 pod_ready.go:81] duration metric: took 399.978369ms waiting for pod "kube-proxy-fhzft" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:38.720339  340990 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:38.917300  340990 request.go:629] Waited for 196.887874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107035
	I0229 01:48:38.917398  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107035
	I0229 01:48:38.917404  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:38.917411  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:38.917415  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:38.920103  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:38.920123  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:38.920129  340990 round_trippers.go:580]     Audit-Id: d80befa1-057e-4db7-816a-9b23e06237cf
	I0229 01:48:38.920132  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:38.920134  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:38.920137  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:38.920140  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:38.920143  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:38 GMT
	I0229 01:48:38.920315  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-107035","namespace":"kube-system","uid":"ac9bc04a-dac0-40f5-b928-4cacd028df82","resourceVersion":"738","creationTimestamp":"2024-02-29T01:38:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef2538a195901383d6f1be68d27ee2ba","kubernetes.io/config.mirror":"ef2538a195901383d6f1be68d27ee2ba","kubernetes.io/config.seen":"2024-02-29T01:38:23.684827179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4890 chars]
	I0229 01:48:39.117103  340990 request.go:629] Waited for 196.359962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:39.117176  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:39.117184  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:39.117202  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:39.117209  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:39.126731  340990 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0229 01:48:39.126812  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:39.127240  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:39.127262  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:39.127275  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:39.127287  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:39.127310  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:39 GMT
	I0229 01:48:39.127325  340990 round_trippers.go:580]     Audit-Id: ccec9e79-85fd-42e4-8538-1e2766751965
	I0229 01:48:39.127655  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"691","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 01:48:39.128120  340990 pod_ready.go:97] node "multinode-107035" hosting pod "kube-scheduler-multinode-107035" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
	I0229 01:48:39.128156  340990 pod_ready.go:81] duration metric: took 407.807154ms waiting for pod "kube-scheduler-multinode-107035" in "kube-system" namespace to be "Ready" ...
	E0229 01:48:39.128170  340990 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107035" hosting pod "kube-scheduler-multinode-107035" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107035" has status "Ready":"False"
	I0229 01:48:39.128189  340990 pod_ready.go:38] duration metric: took 1.753461128s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
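The pod_ready.go:38 summary above closes a sweep over six label selectors. A hedged sketch of what one pass of that sweep amounts to in client-go terms (the function is illustrative; minikube's own pod_ready.go differs in detail):

```go
package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// notReadySystemPods lists kube-system pods behind the same label
// selectors as the log above and returns the names of any pod that
// does not yet carry PodReady=True.
func notReadySystemPods(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	var pending []string
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return nil, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, cond := range p.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			if !ready {
				pending = append(pending, p.Name)
			}
		}
	}
	return pending, nil
}
```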
	I0229 01:48:39.128218  340990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 01:48:39.178511  340990 command_runner.go:130] > -16
	I0229 01:48:39.178564  340990 ops.go:34] apiserver oom_adj: -16
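The `-16` read back above verifies the API server's OOM-killer shielding. The same probe written as plain Go instead of the ssh'd shell one-liner (note oom_adj is the legacy procfs knob; newer kernels prefer oom_score_adj):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep may print several PIDs, one per line; take the first, which
	// is what the $(pgrep kube-apiserver) substitution above assumes.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		log.Fatal(err)
	}
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(strings.TrimSpace(string(adj))) // e.g. -16
}
```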
	I0229 01:48:39.178587  340990 kubeadm.go:640] restartCluster took 20.587001136s
	I0229 01:48:39.178602  340990 kubeadm.go:406] StartCluster complete in 20.642084493s
	I0229 01:48:39.178621  340990 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:48:39.178699  340990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:48:39.179322  340990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:48:39.231946  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 01:48:39.232023  340990 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 01:48:39.232322  340990 config.go:182] Loaded profile config "multinode-107035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:48:39.232393  340990 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:48:39.266453  340990 out.go:177] * Enabled addons: 
	I0229 01:48:39.267880  340990 addons.go:505] enable addons completed in 35.854215ms: enabled=[]
	I0229 01:48:39.268130  340990 kapi.go:59] client config for multinode-107035: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.key", CAFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:48:39.268456  340990 round_trippers.go:463] GET https://192.168.39.183:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 01:48:39.268467  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:39.268475  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:39.268479  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:39.271225  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:39.271259  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:39.271269  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:39.271275  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:39.271279  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:39.271284  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:39.271289  340990 round_trippers.go:580]     Content-Length: 291
	I0229 01:48:39.271293  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:39 GMT
	I0229 01:48:39.271299  340990 round_trippers.go:580]     Audit-Id: 00d18f3a-a9f2-4bf9-99c0-7fbb7e8570f2
	I0229 01:48:39.271373  340990 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"886475f9-4800-446f-81db-efbd75717fab","resourceVersion":"784","creationTimestamp":"2024-02-29T01:38:23Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 01:48:39.271536  340990 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-107035" context rescaled to 1 replicas
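The kapi.go:248 rescale above is a read-modify-write on the Deployment's autoscaling/v1 Scale subresource, the same endpoint as the GET …/deployments/coredns/scale just logged. A minimal client-go sketch of that operation (kubeconfig path is a placeholder; error handling trimmed to log.Fatal):

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	// Fetch the Scale subresource, set the desired replica count,
	// and write it back, matching the rescale-to-1 in the log above.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```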
	I0229 01:48:39.271571  340990 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 01:48:39.273011  340990 out.go:177] * Verifying Kubernetes components...
	I0229 01:48:39.274219  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:48:39.420285  340990 command_runner.go:130] > apiVersion: v1
	I0229 01:48:39.420308  340990 command_runner.go:130] > data:
	I0229 01:48:39.420313  340990 command_runner.go:130] >   Corefile: |
	I0229 01:48:39.420332  340990 command_runner.go:130] >     .:53 {
	I0229 01:48:39.420336  340990 command_runner.go:130] >         log
	I0229 01:48:39.420340  340990 command_runner.go:130] >         errors
	I0229 01:48:39.420344  340990 command_runner.go:130] >         health {
	I0229 01:48:39.420348  340990 command_runner.go:130] >            lameduck 5s
	I0229 01:48:39.420352  340990 command_runner.go:130] >         }
	I0229 01:48:39.420356  340990 command_runner.go:130] >         ready
	I0229 01:48:39.420361  340990 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 01:48:39.420365  340990 command_runner.go:130] >            pods insecure
	I0229 01:48:39.420374  340990 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 01:48:39.420378  340990 command_runner.go:130] >            ttl 30
	I0229 01:48:39.420381  340990 command_runner.go:130] >         }
	I0229 01:48:39.420385  340990 command_runner.go:130] >         prometheus :9153
	I0229 01:48:39.420389  340990 command_runner.go:130] >         hosts {
	I0229 01:48:39.420394  340990 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0229 01:48:39.420402  340990 command_runner.go:130] >            fallthrough
	I0229 01:48:39.420405  340990 command_runner.go:130] >         }
	I0229 01:48:39.420410  340990 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 01:48:39.420424  340990 command_runner.go:130] >            max_concurrent 1000
	I0229 01:48:39.420432  340990 command_runner.go:130] >         }
	I0229 01:48:39.420438  340990 command_runner.go:130] >         cache 30
	I0229 01:48:39.420448  340990 command_runner.go:130] >         loop
	I0229 01:48:39.420453  340990 command_runner.go:130] >         reload
	I0229 01:48:39.420459  340990 command_runner.go:130] >         loadbalance
	I0229 01:48:39.420466  340990 command_runner.go:130] >     }
	I0229 01:48:39.420471  340990 command_runner.go:130] > kind: ConfigMap
	I0229 01:48:39.420476  340990 command_runner.go:130] > metadata:
	I0229 01:48:39.420485  340990 command_runner.go:130] >   creationTimestamp: "2024-02-29T01:38:23Z"
	I0229 01:48:39.420490  340990 command_runner.go:130] >   name: coredns
	I0229 01:48:39.420505  340990 command_runner.go:130] >   namespace: kube-system
	I0229 01:48:39.420510  340990 command_runner.go:130] >   resourceVersion: "357"
	I0229 01:48:39.420517  340990 command_runner.go:130] >   uid: 31e6e40d-a1ce-487c-9d67-e368cb03961e
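The skip logged just below at start.go:902 rests on a substring check against the Corefile dumped above, which already carries the `192.168.39.1 host.minikube.internal` hosts entry. minikube reads the ConfigMap via kubectl over SSH, as the earlier command shows; an equivalent check with client-go might look like this sketch:

```go
package readiness

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasHostRecord mirrors the start.go:902 decision: fetch the coredns
// ConfigMap and test whether its Corefile already contains the
// host.minikube.internal hosts entry, in which case no edit is needed.
func hasHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}
```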
	I0229 01:48:39.422490  340990 node_ready.go:35] waiting up to 6m0s for node "multinode-107035" to be "Ready" ...
	I0229 01:48:39.422592  340990 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 01:48:39.422650  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:39.422664  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:39.422674  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:39.422682  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:39.424835  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:39.424855  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:39.424864  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:39.424871  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:39.424876  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:39.424880  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:39 GMT
	I0229 01:48:39.424885  340990 round_trippers.go:580]     Audit-Id: 62294c1b-c6a6-4a05-b451-2719c2c9ce64
	I0229 01:48:39.424888  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:39.425188  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"691","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 01:48:39.923421  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:39.923444  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:39.923453  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:39.923465  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:39.925880  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:39.925906  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:39.925922  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:39.925925  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:39.925927  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:39.925934  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:39.925939  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:39 GMT
	I0229 01:48:39.925941  340990 round_trippers.go:580]     Audit-Id: d2c8830e-edc0-4bf2-8858-bf775cc695cd
	I0229 01:48:39.926514  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"691","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 01:48:40.423208  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:40.423240  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:40.423251  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:40.423258  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:40.426253  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:40.426270  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:40.426277  340990 round_trippers.go:580]     Audit-Id: 86b8cf5d-33b6-4286-b9c7-b3f0485e2822
	I0229 01:48:40.426282  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:40.426285  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:40.426288  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:40.426292  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:40.426294  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:40 GMT
	I0229 01:48:40.426468  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"691","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0229 01:48:40.923086  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:40.923114  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:40.923122  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:40.923128  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:40.925748  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:40.925773  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:40.925782  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:40.925786  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:40.925790  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:40 GMT
	I0229 01:48:40.925794  340990 round_trippers.go:580]     Audit-Id: 8a6c24ea-a649-44a9-9106-b112639d0b79
	I0229 01:48:40.925797  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:40.925800  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:40.926052  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:40.926451  340990 node_ready.go:49] node "multinode-107035" has status "Ready":"True"
	I0229 01:48:40.926474  340990 node_ready.go:38] duration metric: took 1.50395315s waiting for node "multinode-107035" to be "Ready" ...
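The node_ready.go:49 success above is the exit of a polling loop, roughly 500ms per attempt (visible in the request timestamps) against a 6m budget. A sketch of that loop with apimachinery's wait helpers, reusing the nodeIsReady function from the earlier snippet; real code would likely tolerate transient GET errors instead of aborting on the first one:

```go
package readiness

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls every 500ms until the node reports Ready or the
// 6-minute budget from the log above runs out.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return nodeIsReady(ctx, cs, name)
		})
}
```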
	I0229 01:48:40.926485  340990 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 01:48:40.926537  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I0229 01:48:40.926546  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:40.926553  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:40.926556  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:40.930056  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:40.930080  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:40.930089  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:40.930095  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:40.930099  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:40 GMT
	I0229 01:48:40.930105  340990 round_trippers.go:580]     Audit-Id: 323bcb45-4759-4866-86de-a0e7c8ae6bc8
	I0229 01:48:40.930109  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:40.930113  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:40.931979  340990 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"813"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"741","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82555 chars]
	I0229 01:48:40.934388  340990 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:40.934461  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:48:40.934469  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:40.934476  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:40.934481  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:40.936332  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:48:40.936351  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:40.936360  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:40.936366  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:40 GMT
	I0229 01:48:40.936383  340990 round_trippers.go:580]     Audit-Id: 5803df33-5020-4572-a0e1-6b013e1d1229
	I0229 01:48:40.936393  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:40.936397  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:40.936400  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:40.936739  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"741","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 01:48:40.937155  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:40.937175  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:40.937184  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:40.937191  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:40.938996  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:48:40.939014  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:40.939023  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:40.939028  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:40 GMT
	I0229 01:48:40.939034  340990 round_trippers.go:580]     Audit-Id: f804ba6a-fefd-4cb9-b191-392f7e7cbe3d
	I0229 01:48:40.939041  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:40.939046  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:40.939053  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:40.939279  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:41.434934  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:48:41.434969  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:41.434982  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:41.434987  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:41.437393  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:41.437419  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:41.437428  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:41.437433  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:41.437437  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:41 GMT
	I0229 01:48:41.437449  340990 round_trippers.go:580]     Audit-Id: e6b709f4-ab70-417f-adf7-6bd5f0b7be87
	I0229 01:48:41.437455  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:41.437462  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:41.437839  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"741","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 01:48:41.438331  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:41.438354  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:41.438361  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:41.438365  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:41.440358  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:48:41.440380  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:41.440389  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:41.440394  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:41.440400  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:41.440404  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:41 GMT
	I0229 01:48:41.440408  340990 round_trippers.go:580]     Audit-Id: a2d173f8-cdbe-4638-82b3-04d02e320e9b
	I0229 01:48:41.440417  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:41.440810  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:41.935549  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:48:41.935583  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:41.935594  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:41.935600  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:41.938761  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:41.938791  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:41.938800  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:41.938805  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:41.938811  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:41 GMT
	I0229 01:48:41.938817  340990 round_trippers.go:580]     Audit-Id: 99b39f0a-f3f5-49e6-8a11-7cbbfacc3e39
	I0229 01:48:41.938820  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:41.938824  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:41.939041  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"741","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 01:48:41.939652  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:41.939677  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:41.939688  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:41.939701  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:41.941800  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:41.941820  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:41.941830  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:41.941834  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:41.941840  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:41.941844  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:41 GMT
	I0229 01:48:41.941848  340990 round_trippers.go:580]     Audit-Id: be75bdfd-8a3c-42bd-a373-d2dd50890bfa
	I0229 01:48:41.941853  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:41.942040  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:42.434708  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:48:42.434735  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:42.434743  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:42.434749  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:42.437324  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:42.437350  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:42.437358  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:42 GMT
	I0229 01:48:42.437363  340990 round_trippers.go:580]     Audit-Id: 5671ebb9-f124-4530-964f-6afeeb097047
	I0229 01:48:42.437366  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:42.437369  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:42.437376  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:42.437379  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:42.438042  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"741","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 01:48:42.438666  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:42.438687  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:42.438697  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:42.438705  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:42.440733  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:42.440755  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:42.440764  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:42.440769  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:42 GMT
	I0229 01:48:42.440773  340990 round_trippers.go:580]     Audit-Id: 55fba803-c168-4187-b77a-31d78a463a30
	I0229 01:48:42.440785  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:42.440789  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:42.440793  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:42.440973  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:42.934990  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:48:42.935033  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:42.935042  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:42.935045  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:42.937530  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:42.937557  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:42.937566  340990 round_trippers.go:580]     Audit-Id: 66d5800b-b0f9-4c43-b380-cd5860333b5f
	I0229 01:48:42.937570  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:42.937574  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:42.937577  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:42.937580  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:42.937583  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:42 GMT
	I0229 01:48:42.938040  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"741","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0229 01:48:42.938519  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:42.938538  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:42.938545  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:42.938549  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:42.940653  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:42.940677  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:42.940686  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:42.940691  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:42.940695  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:42.940700  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:42.940704  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:42 GMT
	I0229 01:48:42.940709  340990 round_trippers.go:580]     Audit-Id: ea1d3831-e4f2-4807-a6d6-b40cee877b8e
	I0229 01:48:42.940900  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:42.941179  340990 pod_ready.go:102] pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace has status "Ready":"False"
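
The pod_ready.go:102 line above is the wait loop's verdict for this iteration: the coredns pod just fetched still reports Ready=False, so the loop sleeps (roughly 500ms, judging by the request timestamps) and polls again. The verdict comes from the pod's status conditions; a hedged sketch of that check using the upstream k8s.io/api types (isPodReady is our illustrative name, not minikube's helper):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady mirrors what a "Ready":"True" log line reports: the
// pod's PodReady condition is True. Sketch only.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A pod whose Ready condition is still False, as at 01:48:42 above.
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(isPodReady(pod)) // prints: false
}
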
	I0229 01:48:43.435621  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:48:43.435645  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:43.435652  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:43.435656  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:43.438633  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:43.438658  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:43.438668  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:43.438678  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:43.438684  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:43.438692  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:43.438696  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:43 GMT
	I0229 01:48:43.438700  340990 round_trippers.go:580]     Audit-Id: 001fc86a-09ca-40b5-bc86-d81a94329ae6
	I0229 01:48:43.438976  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"819","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6455 chars]
	I0229 01:48:43.439507  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:43.439524  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:43.439532  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:43.439539  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:43.441766  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:43.441781  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:43.441787  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:43.441791  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:43.441800  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:43 GMT
	I0229 01:48:43.441805  340990 round_trippers.go:580]     Audit-Id: 75269286-d3d4-4a1f-9f15-87e4b65672d9
	I0229 01:48:43.441808  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:43.441814  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:43.442175  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:43.934768  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:48:43.934809  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:43.934820  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:43.934824  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:43.937799  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:43.937831  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:43.937841  340990 round_trippers.go:580]     Audit-Id: b3330e7b-8ef9-4c24-bd03-07e7d76ed6ec
	I0229 01:48:43.937848  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:43.937852  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:43.937855  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:43.937859  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:43.937863  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:43 GMT
	I0229 01:48:43.938204  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"819","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6455 chars]
	I0229 01:48:43.938669  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:43.938686  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:43.938692  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:43.938695  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:43.941448  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:43.941463  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:43.941469  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:43.941473  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:43.941477  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:43.941481  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:43 GMT
	I0229 01:48:43.941486  340990 round_trippers.go:580]     Audit-Id: 481ec135-98df-4442-865a-1179d47ffecf
	I0229 01:48:43.941490  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:43.941634  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:44.435203  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:48:44.435225  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:44.435236  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:44.435242  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:44.439520  340990 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 01:48:44.439541  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:44.439548  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:44.439552  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:44.439555  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:44.439557  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:44.439559  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:44 GMT
	I0229 01:48:44.439561  340990 round_trippers.go:580]     Audit-Id: 47ab2857-783a-4775-877a-4bb58928ee48
	I0229 01:48:44.439757  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"820","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6226 chars]
	I0229 01:48:44.440193  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:44.440205  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:44.440212  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:44.440217  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:44.442336  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:44.442354  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:44.442361  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:44.442370  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:44.442374  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:44 GMT
	I0229 01:48:44.442378  340990 round_trippers.go:580]     Audit-Id: 652fc0be-8d8d-41ce-8c53-d77d52e9fc1f
	I0229 01:48:44.442382  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:44.442386  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:44.442591  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:44.442887  340990 pod_ready.go:92] pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace has status "Ready":"True"
	I0229 01:48:44.442903  340990 pod_ready.go:81] duration metric: took 3.508495392s waiting for pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:44.442911  340990 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-107035" in "kube-system" namespace to be "Ready" ...
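
With coredns Ready after about 3.5s, the same machinery now starts a fresh 6-minute wait for etcd-multinode-107035. The overall shape, one GET roughly every 500ms until the pod reports Ready or the deadline passes, could look like the client-go sketch below; the function name, the fixed 500ms interval, and the clientset wiring are our assumptions rather than minikube's exact pod_ready.go logic.

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodReady polls the API server until the named pod reports
// Ready=True or the timeout elapses, matching the GET cadence in the
// log above. Illustrative sketch, not minikube's actual code.
func WaitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s not Ready within %s: %w", ns, name, timeout, ctx.Err())
		case <-ticker.C:
			// fall through to the next GET, as at 01:48:44.943 above
		}
	}
}

A caller would build cs from a kubeconfig (for example clientcmd.BuildConfigFromFlags followed by kubernetes.NewForConfig) and invoke WaitForPodReady(cs, "kube-system", "etcd-multinode-107035", 6*time.Minute).
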
	I0229 01:48:44.442960  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:48:44.442968  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:44.442976  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:44.442983  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:44.445119  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:44.445137  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:44.445143  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:44.445146  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:44.445149  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:44.445152  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:44 GMT
	I0229 01:48:44.445154  340990 round_trippers.go:580]     Audit-Id: 188ec4ba-3363-4467-82f5-15c63a91911e
	I0229 01:48:44.445157  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:44.445524  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"739","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6049 chars]
	I0229 01:48:44.446042  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:44.446060  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:44.446069  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:44.446072  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:44.448882  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:44.448896  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:44.448902  340990 round_trippers.go:580]     Audit-Id: de90de5d-2985-4a92-9420-b1e6142d25e7
	I0229 01:48:44.448906  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:44.448909  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:44.448912  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:44.448914  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:44.448916  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:44 GMT
	I0229 01:48:44.449090  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:44.943204  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:48:44.943226  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:44.943237  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:44.943242  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:44.945510  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:44.945527  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:44.945533  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:44.945537  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:44.945547  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:44.945551  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:44.945553  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:44 GMT
	I0229 01:48:44.945556  340990 round_trippers.go:580]     Audit-Id: 10bce944-1c7f-4c9a-b1f0-cda327a3c658
	I0229 01:48:44.945743  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"739","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6049 chars]
	I0229 01:48:44.946112  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:44.946124  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:44.946130  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:44.946133  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:44.948148  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:44.948162  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:44.948168  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:44.948172  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:44.948176  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:44 GMT
	I0229 01:48:44.948180  340990 round_trippers.go:580]     Audit-Id: a9eff3ad-8319-42c6-8d4e-0235270cd17d
	I0229 01:48:44.948183  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:44.948187  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:44.948535  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:45.443217  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:48:45.443243  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:45.443252  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:45.443257  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:45.445860  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:45.445879  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:45.445885  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:45.445888  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:45.445892  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:45 GMT
	I0229 01:48:45.445897  340990 round_trippers.go:580]     Audit-Id: 1f6d42dd-d9b8-4b47-8838-23f735233ca5
	I0229 01:48:45.445901  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:45.445905  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:45.446089  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"739","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6049 chars]
	I0229 01:48:45.446495  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:45.446509  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:45.446516  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:45.446520  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:45.451465  340990 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 01:48:45.451490  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:45.451497  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:45.451502  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:45.451505  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:45 GMT
	I0229 01:48:45.451510  340990 round_trippers.go:580]     Audit-Id: fbc0bdae-4cfa-4d06-8d9c-e3d155745c3f
	I0229 01:48:45.451514  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:45.451518  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:45.451929  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:45.943215  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:48:45.943248  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:45.943260  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:45.943267  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:45.945939  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:45.945953  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:45.945959  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:45.945962  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:45.945965  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:45.945968  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:45 GMT
	I0229 01:48:45.945972  340990 round_trippers.go:580]     Audit-Id: ceacb5cf-c9e4-42af-8432-e95b1eed6e20
	I0229 01:48:45.945974  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:45.946179  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"739","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6049 chars]
	I0229 01:48:45.946599  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:45.946611  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:45.946619  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:45.946622  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:45.949721  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:45.949744  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:45.949754  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:45.949760  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:45.949764  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:45 GMT
	I0229 01:48:45.949767  340990 round_trippers.go:580]     Audit-Id: 61e83549-03d1-4257-9d74-a7815b5f6383
	I0229 01:48:45.949769  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:45.949772  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:45.949936  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:46.443545  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:48:46.443580  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:46.443592  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:46.443600  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:46.446277  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:46.446296  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:46.446303  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:46.446307  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:46.446310  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:46.446313  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:46.446316  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:46 GMT
	I0229 01:48:46.446325  340990 round_trippers.go:580]     Audit-Id: 9558413a-6da7-403a-9ec9-798073cfc4be
	I0229 01:48:46.446853  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"739","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6049 chars]
	I0229 01:48:46.447387  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:46.447408  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:46.447415  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:46.447421  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:46.449499  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:46.449521  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:46.449530  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:46.449535  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:46.449541  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:46.449545  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:46 GMT
	I0229 01:48:46.449549  340990 round_trippers.go:580]     Audit-Id: 746d6341-4054-4e6f-b5ff-9c6ffdfc3e4b
	I0229 01:48:46.449554  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:46.449800  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:46.450090  340990 pod_ready.go:102] pod "etcd-multinode-107035" in "kube-system" namespace has status "Ready":"False"
	I0229 01:48:46.943483  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:48:46.943508  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:46.943516  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:46.943519  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:46.945985  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:46.946008  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:46.946018  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:46.946025  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:46.946031  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:46.946037  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:46 GMT
	I0229 01:48:46.946040  340990 round_trippers.go:580]     Audit-Id: c01d7ed5-6b7f-4292-8d7f-71b21d37d768
	I0229 01:48:46.946044  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:46.946667  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"739","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6049 chars]
	I0229 01:48:46.947056  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:46.947070  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:46.947077  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:46.947080  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:46.949029  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:48:46.949050  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:46.949058  340990 round_trippers.go:580]     Audit-Id: 727e6358-c652-461f-aaf0-80f74febc199
	I0229 01:48:46.949064  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:46.949072  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:46.949075  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:46.949078  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:46.949081  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:46 GMT
	I0229 01:48:46.949439  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:47.443079  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:48:47.443104  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.443112  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.443117  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.447496  340990 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 01:48:47.447518  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.447524  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.447527  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.447529  340990 round_trippers.go:580]     Audit-Id: 7f2e8a9e-8b07-4e9a-95db-03e8cae48c51
	I0229 01:48:47.447532  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.447552  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.447561  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.447769  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"739","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6049 chars]
	I0229 01:48:47.448231  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:47.448251  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.448261  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.448266  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.454082  340990 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 01:48:47.454101  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.454108  340990 round_trippers.go:580]     Audit-Id: 27491f71-4b20-466a-9be7-f53cad85190e
	I0229 01:48:47.454112  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.454115  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.454118  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.454121  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.454125  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.455211  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:47.943321  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:48:47.943351  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.943363  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.943381  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.945751  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:47.945778  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.945788  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.945794  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.945798  340990 round_trippers.go:580]     Audit-Id: 489eecff-d8dd-4ecb-8daa-9fb2891e9d3d
	I0229 01:48:47.945801  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.945807  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.945819  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.946041  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"841","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5825 chars]
	I0229 01:48:47.946489  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:47.946503  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.946509  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.946514  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.949785  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:47.949800  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.949808  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.949812  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.949817  340990 round_trippers.go:580]     Audit-Id: bddfdf5d-77d4-4f10-88db-ae8bc7dcdc65
	I0229 01:48:47.949820  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.949833  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.949838  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.950146  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:47.950462  340990 pod_ready.go:92] pod "etcd-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:48:47.950483  340990 pod_ready.go:81] duration metric: took 3.507564765s waiting for pod "etcd-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:47.950508  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:47.950574  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-107035
	I0229 01:48:47.950584  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.950593  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.950599  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.952655  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:47.952676  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.952685  340990 round_trippers.go:580]     Audit-Id: 055ca2e6-cf81-4344-8e9b-83494470c2e1
	I0229 01:48:47.952692  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.952698  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.952702  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.952706  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.952723  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.952899  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-107035","namespace":"kube-system","uid":"c8a5ad6e-c2cc-49a4-8837-ba1b280f87af","resourceVersion":"839","creationTimestamp":"2024-02-29T01:38:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.183:8443","kubernetes.io/config.hash":"f8e3f19840dda0faee1ad3a91ae482c1","kubernetes.io/config.mirror":"f8e3f19840dda0faee1ad3a91ae482c1","kubernetes.io/config.seen":"2024-02-29T01:38:16.621158531Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7351 chars]
	I0229 01:48:47.953456  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:47.953478  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.953487  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.953493  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.958693  340990 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 01:48:47.958710  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.958717  340990 round_trippers.go:580]     Audit-Id: 7232a696-493e-4181-84d0-a6143b817c78
	I0229 01:48:47.958723  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.958729  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.958733  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.958737  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.958740  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.958883  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:47.959176  340990 pod_ready.go:92] pod "kube-apiserver-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:48:47.959197  340990 pod_ready.go:81] duration metric: took 8.673259ms waiting for pod "kube-apiserver-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:47.959213  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:47.959282  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-107035
	I0229 01:48:47.959292  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.959301  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.959310  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.961249  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:48:47.961263  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.961269  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.961275  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.961280  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.961284  340990 round_trippers.go:580]     Audit-Id: 873e80cd-4b7c-467f-94a0-09e40ad72ad3
	I0229 01:48:47.961288  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.961291  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.961582  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-107035","namespace":"kube-system","uid":"cc34d9e0-d4bd-4fac-8c94-6ead8a744abc","resourceVersion":"834","creationTimestamp":"2024-02-29T01:38:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d885436ac2f1544135b29b38fb6816fc","kubernetes.io/config.mirror":"d885436ac2f1544135b29b38fb6816fc","kubernetes.io/config.seen":"2024-02-29T01:38:23.684826383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6907 chars]
	I0229 01:48:47.961981  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:47.961995  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.962005  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.962013  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.964647  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:47.964668  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.964677  340990 round_trippers.go:580]     Audit-Id: 3ded9242-2a26-459b-8d6c-aaaccc7b57a1
	I0229 01:48:47.964681  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.964685  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.964688  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.964691  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.964694  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.964852  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:47.965254  340990 pod_ready.go:92] pod "kube-controller-manager-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:48:47.965277  340990 pod_ready.go:81] duration metric: took 6.052588ms waiting for pod "kube-controller-manager-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:47.965290  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2vt7v" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:47.965360  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vt7v
	I0229 01:48:47.965371  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.965381  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.965400  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.967760  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:47.967778  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.967786  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.967791  340990 round_trippers.go:580]     Audit-Id: d150331c-82d3-4413-8545-3a5319bb2bb8
	I0229 01:48:47.967797  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.967802  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.967806  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.967810  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.967951  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vt7v","generateName":"kube-proxy-","namespace":"kube-system","uid":"eaa78334-8191-47e9-b001-343c90a87460","resourceVersion":"466","creationTimestamp":"2024-02-29T01:39:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:39:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5488 chars]
	I0229 01:48:47.968428  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m02
	I0229 01:48:47.968443  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.968454  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.968459  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.970456  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:48:47.970475  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.970482  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.970485  340990 round_trippers.go:580]     Audit-Id: 83057072-cdf7-48f1-9076-ebb198f4249a
	I0229 01:48:47.970489  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.970492  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.970495  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.970498  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.970790  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035-m02","uid":"ce7e14a9-031d-40ba-b40d-27d557da3a03","resourceVersion":"771","creationTimestamp":"2024-02-29T01:39:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T01_40_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:39:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0229 01:48:47.971034  340990 pod_ready.go:92] pod "kube-proxy-2vt7v" in "kube-system" namespace has status "Ready":"True"
	I0229 01:48:47.971050  340990 pod_ready.go:81] duration metric: took 5.749371ms waiting for pod "kube-proxy-2vt7v" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:47.971058  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vhtd" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:47.971107  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vhtd
	I0229 01:48:47.971115  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:47.971121  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:47.971125  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:47.973092  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:48:47.973107  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:47.973113  340990 round_trippers.go:580]     Audit-Id: 1d3cd4be-d658-46ac-b761-b4fd38f4d9fd
	I0229 01:48:47.973117  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:47.973120  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:47.973122  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:47.973125  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:47.973129  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:47 GMT
	I0229 01:48:47.973284  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7vhtd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1a552ea7-1d99-46ec-99e1-30ad4ac72ca8","resourceVersion":"775","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5484 chars]
	I0229 01:48:48.116852  340990 request.go:629] Waited for 143.225499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:48.116924  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:48.116931  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:48.116941  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:48.116946  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:48.119732  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:48.119760  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:48.119770  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:48.119775  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:48.119779  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:48 GMT
	I0229 01:48:48.119785  340990 round_trippers.go:580]     Audit-Id: 0758de8c-3415-4dfa-ad6c-2f5f8db18653
	I0229 01:48:48.119793  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:48.119796  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:48.120153  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:48.120607  340990 pod_ready.go:92] pod "kube-proxy-7vhtd" in "kube-system" namespace has status "Ready":"True"
	I0229 01:48:48.120635  340990 pod_ready.go:81] duration metric: took 149.569843ms waiting for pod "kube-proxy-7vhtd" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:48.120651  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fhzft" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:48.317100  340990 request.go:629] Waited for 196.356639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhzft
	I0229 01:48:48.317202  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhzft
	I0229 01:48:48.317218  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:48.317227  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:48.317232  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:48.320972  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:48.321003  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:48.321014  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:48.321020  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:48.321026  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:48 GMT
	I0229 01:48:48.321031  340990 round_trippers.go:580]     Audit-Id: a0c12b52-2304-4a12-b519-53c293044ff4
	I0229 01:48:48.321036  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:48.321040  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:48.321211  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fhzft","generateName":"kube-proxy-","namespace":"kube-system","uid":"3b05cd87-92a9-4c59-879a-d42c3a08c7d4","resourceVersion":"669","creationTimestamp":"2024-02-29T01:40:04Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:40:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5492 chars]
	I0229 01:48:48.517202  340990 request.go:629] Waited for 195.373117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m03
	I0229 01:48:48.517282  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m03
	I0229 01:48:48.517289  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:48.517328  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:48.517339  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:48.520222  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:48.520253  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:48.520266  340990 round_trippers.go:580]     Audit-Id: e80a5f4e-1bad-4be4-bfcd-ee38db029060
	I0229 01:48:48.520273  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:48.520278  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:48.520282  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:48.520286  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:48.520290  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:48 GMT
	I0229 01:48:48.520437  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035-m03","uid":"7068367c-f5dd-4a1d-bba4-904a860289cd","resourceVersion":"830","creationTimestamp":"2024-02-29T01:40:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T01_40_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0229 01:48:48.520749  340990 pod_ready.go:92] pod "kube-proxy-fhzft" in "kube-system" namespace has status "Ready":"True"
	I0229 01:48:48.520773  340990 pod_ready.go:81] duration metric: took 400.110903ms waiting for pod "kube-proxy-fhzft" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:48.520787  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:48.716762  340990 request.go:629] Waited for 195.864193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107035
	I0229 01:48:48.716871  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107035
	I0229 01:48:48.716882  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:48.716893  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:48.716898  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:48.719164  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:48.719182  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:48.719188  340990 round_trippers.go:580]     Audit-Id: 08486f74-0063-48f7-a0a9-73c1cb1f9ef5
	I0229 01:48:48.719193  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:48.719196  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:48.719199  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:48.719201  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:48.719204  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:48 GMT
	I0229 01:48:48.719512  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-107035","namespace":"kube-system","uid":"ac9bc04a-dac0-40f5-b928-4cacd028df82","resourceVersion":"840","creationTimestamp":"2024-02-29T01:38:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef2538a195901383d6f1be68d27ee2ba","kubernetes.io/config.mirror":"ef2538a195901383d6f1be68d27ee2ba","kubernetes.io/config.seen":"2024-02-29T01:38:23.684827179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4646 chars]
	I0229 01:48:48.917379  340990 request.go:629] Waited for 197.409356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:48.917499  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:48:48.917506  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:48.917513  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:48.917520  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:48.919936  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:48.919961  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:48.919968  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:48.920535  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:48.920601  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:48.920619  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:48.920633  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:48 GMT
	I0229 01:48:48.920645  340990 round_trippers.go:580]     Audit-Id: 033bc606-659d-4c08-975c-6f0b8788cbda
	I0229 01:48:48.920842  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0229 01:48:48.921407  340990 pod_ready.go:92] pod "kube-scheduler-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:48:48.921467  340990 pod_ready.go:81] duration metric: took 400.659383ms waiting for pod "kube-scheduler-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:48:48.921491  340990 pod_ready.go:38] duration metric: took 7.994996487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
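
The pod_ready loop logged above polls each system pod's PodReady condition at roughly 500ms intervals (visible in the request timestamps) until the pod reports Ready or the 6m budget expires. A minimal client-go sketch of that style of check follows; it is not minikube's actual pod_ready implementation, and the kubeconfig path and pod name are illustrative assumptions taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: credentials come from the default kubeconfig file.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms, up to the 6m timeout used by pod_ready.go:78 above,
        // until the pod's PodReady condition is True.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-107035", metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("ready:", err == nil)
    }
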
	I0229 01:48:48.921532  340990 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:48:48.921621  340990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:48:48.939126  340990 command_runner.go:130] > 1061
	I0229 01:48:48.939182  340990 api_server.go:72] duration metric: took 9.667583795s to wait for apiserver process to appear ...
	I0229 01:48:48.939198  340990 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:48:48.939224  340990 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0229 01:48:48.945260  340990 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
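
The healthz probe above succeeds when GET /healthz returns HTTP 200 with the literal plain-text body "ok". A small sketch of the same probe through client-go's raw REST client, under the same kubeconfig assumption as the previous sketch:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz; a healthy apiserver answers 200 with the body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        fmt.Println(string(body), err)
    }
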
	I0229 01:48:48.945343  340990 round_trippers.go:463] GET https://192.168.39.183:8443/version
	I0229 01:48:48.945354  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:48.945366  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:48.945377  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:48.946361  340990 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 01:48:48.946385  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:48.946395  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:48 GMT
	I0229 01:48:48.946401  340990 round_trippers.go:580]     Audit-Id: f1d63c09-a467-41d4-8e1a-920918a25e73
	I0229 01:48:48.946405  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:48.946411  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:48.946415  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:48.946419  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:48.946423  340990 round_trippers.go:580]     Content-Length: 264
	I0229 01:48:48.946469  340990 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 01:48:48.946529  340990 api_server.go:141] control plane version: v1.28.4
	I0229 01:48:48.946549  340990 api_server.go:131] duration metric: took 7.340289ms to wait for apiserver health ...
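
The /version response body above is what client-go's discovery client surfaces as a version.Info struct; its GitVersion field is the "control plane version: v1.28.4" reported in the log. A sketch of the same check, assuming a clientset cs built as in the earlier sketches:

    package main

    import "k8s.io/client-go/kubernetes"

    // checkVersion mirrors the control-plane version check above.
    func checkVersion(cs *kubernetes.Clientset) (string, error) {
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return v.GitVersion, nil // "v1.28.4" against the server logged above
    }
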
	I0229 01:48:48.946561  340990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:48:49.116933  340990 request.go:629] Waited for 170.258315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I0229 01:48:49.117000  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I0229 01:48:49.117006  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:49.117013  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:49.117017  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:49.120700  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:49.120725  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:49.120734  340990 round_trippers.go:580]     Audit-Id: eac2528b-4a50-4c12-96a3-5e960d0541d6
	I0229 01:48:49.120740  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:49.120746  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:49.120754  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:49.120759  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:49.120764  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:49 GMT
	I0229 01:48:49.122517  340990 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"841"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"820","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81437 chars]
	I0229 01:48:49.125060  340990 system_pods.go:59] 12 kube-system pods found
	I0229 01:48:49.125084  340990 system_pods.go:61] "coredns-5dd5756b68-5fqf2" [2730e330-16ca-4b2d-a5dc-330ff37ab57e] Running
	I0229 01:48:49.125088  340990 system_pods.go:61] "etcd-multinode-107035" [65255c97-af0a-4233-b308-e46dfd75a9f9] Running
	I0229 01:48:49.125093  340990 system_pods.go:61] "kindnet-g9fbr" [31f24411-2b54-422d-873f-5826bdb2139a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 01:48:49.125099  340990 system_pods.go:61] "kindnet-hfz2n" [3ba1ea9a-17be-421b-b430-21e867586927] Running
	I0229 01:48:49.125104  340990 system_pods.go:61] "kindnet-tqzhh" [ccf5ad9d-f1ce-41d5-9d35-43618107f5c8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 01:48:49.125111  340990 system_pods.go:61] "kube-apiserver-multinode-107035" [c8a5ad6e-c2cc-49a4-8837-ba1b280f87af] Running
	I0229 01:48:49.125116  340990 system_pods.go:61] "kube-controller-manager-multinode-107035" [cc34d9e0-d4bd-4fac-8c94-6ead8a744abc] Running
	I0229 01:48:49.125118  340990 system_pods.go:61] "kube-proxy-2vt7v" [eaa78334-8191-47e9-b001-343c90a87460] Running
	I0229 01:48:49.125122  340990 system_pods.go:61] "kube-proxy-7vhtd" [1a552ea7-1d99-46ec-99e1-30ad4ac72ca8] Running
	I0229 01:48:49.125127  340990 system_pods.go:61] "kube-proxy-fhzft" [3b05cd87-92a9-4c59-879a-d42c3a08c7d4] Running
	I0229 01:48:49.125130  340990 system_pods.go:61] "kube-scheduler-multinode-107035" [ac9bc04a-dac0-40f5-b928-4cacd028df82] Running
	I0229 01:48:49.125132  340990 system_pods.go:61] "storage-provisioner" [d83d7986-be05-4caf-bec9-ef577b473d77] Running
	I0229 01:48:49.125138  340990 system_pods.go:74] duration metric: took 178.57244ms to wait for pod list to return data ...
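
The 12-pod inventory above comes from a single list of the kube-system namespace. A hedged client-go equivalent (cs again assumed to be a clientset built as in the first sketch):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods reproduces the inventory above: one List call over
    // kube-system, then each pod's name and phase, e.g.
    // "etcd-multinode-107035 Running".
    func listSystemPods(cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase)
        }
        return nil
    }
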
	I0229 01:48:49.125149  340990 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:48:49.317587  340990 request.go:629] Waited for 192.3561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I0229 01:48:49.317675  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/default/serviceaccounts
	I0229 01:48:49.317681  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:49.317691  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:49.317723  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:49.321113  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:49.321137  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:49.321146  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:49 GMT
	I0229 01:48:49.321152  340990 round_trippers.go:580]     Audit-Id: 0b9f837e-efb0-499c-b120-7894160fff4c
	I0229 01:48:49.321157  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:49.321161  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:49.321165  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:49.321175  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:49.321179  340990 round_trippers.go:580]     Content-Length: 261
	I0229 01:48:49.321209  340990 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"841"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e1580abc-709b-4fa4-8047-55764a95c45d","resourceVersion":"300","creationTimestamp":"2024-02-29T01:38:35Z"}}]}
	I0229 01:48:49.321410  340990 default_sa.go:45] found service account: "default"
	I0229 01:48:49.321433  340990 default_sa.go:55] duration metric: took 196.276967ms for default service account to be created ...
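
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines above are produced by client-go's local rate limiter, not by the apiserver: the default client configuration allows about 5 requests/s with a burst of 10, and requests beyond that budget are delayed before they are ever sent. A sketch of relaxing those limits on a rest.Config; the values are illustrative, not what minikube configures:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go defaults to QPS=5 and Burst=10; bursts of sequential GETs
        // beyond that are delayed locally, which is exactly what the
        // "Waited for ... due to client-side throttling" lines record.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
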
	I0229 01:48:49.321447  340990 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 01:48:49.517495  340990 request.go:629] Waited for 195.972366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I0229 01:48:49.517552  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I0229 01:48:49.517557  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:49.517574  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:49.517579  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:49.521481  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:48:49.521508  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:49.521519  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:49.521525  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:49 GMT
	I0229 01:48:49.521530  340990 round_trippers.go:580]     Audit-Id: 13c3dad5-e28e-4f1e-a00a-4a9b4d10a09d
	I0229 01:48:49.521535  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:49.521541  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:49.521547  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:49.523700  340990 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"841"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"820","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81437 chars]
	I0229 01:48:49.525989  340990 system_pods.go:86] 12 kube-system pods found
	I0229 01:48:49.526008  340990 system_pods.go:89] "coredns-5dd5756b68-5fqf2" [2730e330-16ca-4b2d-a5dc-330ff37ab57e] Running
	I0229 01:48:49.526013  340990 system_pods.go:89] "etcd-multinode-107035" [65255c97-af0a-4233-b308-e46dfd75a9f9] Running
	I0229 01:48:49.526021  340990 system_pods.go:89] "kindnet-g9fbr" [31f24411-2b54-422d-873f-5826bdb2139a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 01:48:49.526028  340990 system_pods.go:89] "kindnet-hfz2n" [3ba1ea9a-17be-421b-b430-21e867586927] Running
	I0229 01:48:49.526034  340990 system_pods.go:89] "kindnet-tqzhh" [ccf5ad9d-f1ce-41d5-9d35-43618107f5c8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0229 01:48:49.526039  340990 system_pods.go:89] "kube-apiserver-multinode-107035" [c8a5ad6e-c2cc-49a4-8837-ba1b280f87af] Running
	I0229 01:48:49.526043  340990 system_pods.go:89] "kube-controller-manager-multinode-107035" [cc34d9e0-d4bd-4fac-8c94-6ead8a744abc] Running
	I0229 01:48:49.526050  340990 system_pods.go:89] "kube-proxy-2vt7v" [eaa78334-8191-47e9-b001-343c90a87460] Running
	I0229 01:48:49.526054  340990 system_pods.go:89] "kube-proxy-7vhtd" [1a552ea7-1d99-46ec-99e1-30ad4ac72ca8] Running
	I0229 01:48:49.526057  340990 system_pods.go:89] "kube-proxy-fhzft" [3b05cd87-92a9-4c59-879a-d42c3a08c7d4] Running
	I0229 01:48:49.526062  340990 system_pods.go:89] "kube-scheduler-multinode-107035" [ac9bc04a-dac0-40f5-b928-4cacd028df82] Running
	I0229 01:48:49.526065  340990 system_pods.go:89] "storage-provisioner" [d83d7986-be05-4caf-bec9-ef577b473d77] Running
	I0229 01:48:49.526071  340990 system_pods.go:126] duration metric: took 204.619032ms to wait for k8s-apps to be running ...
	I0229 01:48:49.526084  340990 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:48:49.526126  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:48:49.543620  340990 system_svc.go:56] duration metric: took 17.524936ms WaitForService to wait for kubelet.
	I0229 01:48:49.543655  340990 kubeadm.go:581] duration metric: took 10.272054865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 01:48:49.543686  340990 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:48:49.717126  340990 request.go:629] Waited for 173.353373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I0229 01:48:49.717220  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I0229 01:48:49.717232  340990 round_trippers.go:469] Request Headers:
	I0229 01:48:49.717241  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:48:49.717248  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:48:49.720104  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:48:49.720126  340990 round_trippers.go:577] Response Headers:
	I0229 01:48:49.720134  340990 round_trippers.go:580]     Audit-Id: b5d44932-d98b-45e5-8e43-df2031d6801d
	I0229 01:48:49.720137  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:48:49.720141  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:48:49.720144  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:48:49.720147  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:48:49.720150  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:48:49 GMT
	I0229 01:48:49.720442  340990 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"841"},"items":[{"metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"813","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16179 chars]
	I0229 01:48:49.721061  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:48:49.721080  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:48:49.721090  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:48:49.721094  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:48:49.721098  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:48:49.721102  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:48:49.721108  340990 node_conditions.go:105] duration metric: took 177.414871ms to run NodePressure ...
	I0229 01:48:49.721126  340990 start.go:228] waiting for startup goroutines ...
	I0229 01:48:49.721133  340990 start.go:233] waiting for cluster config update ...
	I0229 01:48:49.721142  340990 start.go:242] writing updated cluster config ...
	I0229 01:48:49.721586  340990 config.go:182] Loaded profile config "multinode-107035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:48:49.721665  340990 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/config.json ...
	I0229 01:48:49.723814  340990 out.go:177] * Starting worker node multinode-107035-m02 in cluster multinode-107035
	I0229 01:48:49.724956  340990 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:48:49.724974  340990 cache.go:56] Caching tarball of preloaded images
	I0229 01:48:49.725064  340990 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 01:48:49.725075  340990 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 01:48:49.725174  340990 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/config.json ...
	I0229 01:48:49.725337  340990 start.go:365] acquiring machines lock for multinode-107035-m02: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:48:49.725387  340990 start.go:369] acquired machines lock for "multinode-107035-m02" in 29.983µs
	I0229 01:48:49.725400  340990 start.go:96] Skipping create...Using existing machine configuration
	I0229 01:48:49.725408  340990 fix.go:54] fixHost starting: m02
	I0229 01:48:49.725649  340990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:48:49.725680  340990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:48:49.740580  340990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45413
	I0229 01:48:49.741132  340990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:48:49.741653  340990 main.go:141] libmachine: Using API Version  1
	I0229 01:48:49.741677  340990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:48:49.742020  340990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:48:49.742204  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .DriverName
	I0229 01:48:49.742359  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetState
	I0229 01:48:49.743854  340990 fix.go:102] recreateIfNeeded on multinode-107035-m02: state=Running err=<nil>
	W0229 01:48:49.743873  340990 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 01:48:49.745653  340990 out.go:177] * Updating the running kvm2 "multinode-107035-m02" VM ...
	I0229 01:48:49.747022  340990 machine.go:88] provisioning docker machine ...
	I0229 01:48:49.747048  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .DriverName
	I0229 01:48:49.747274  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetMachineName
	I0229 01:48:49.747427  340990 buildroot.go:166] provisioning hostname "multinode-107035-m02"
	I0229 01:48:49.747449  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetMachineName
	I0229 01:48:49.747580  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHHostname
	I0229 01:48:49.749751  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:49.750297  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:48:49.750327  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:49.750507  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHPort
	I0229 01:48:49.750702  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:48:49.750834  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:48:49.750952  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHUsername
	I0229 01:48:49.751098  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:48:49.751281  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0229 01:48:49.751295  340990 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-107035-m02 && echo "multinode-107035-m02" | sudo tee /etc/hostname
	I0229 01:48:49.877189  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-107035-m02
	
	I0229 01:48:49.877223  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHHostname
	I0229 01:48:49.880359  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:49.880756  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:48:49.880785  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:49.880954  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHPort
	I0229 01:48:49.881167  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:48:49.881345  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:48:49.881489  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHUsername
	I0229 01:48:49.881634  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:48:49.881815  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0229 01:48:49.881832  340990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-107035-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-107035-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-107035-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:48:49.987504  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
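The script above is minikube's standard hostname fixup: if no /etc/hosts entry already ends in the machine name, it either rewrites an existing 127.0.1.1 line in place or appends one, so the node can resolve its own hostname without DNS. The empty command output here means either the entry was already present or the silent sed path ran.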
	I0229 01:48:49.987536  340990 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 01:48:49.987551  340990 buildroot.go:174] setting up certificates
	I0229 01:48:49.987560  340990 provision.go:83] configureAuth start
	I0229 01:48:49.987569  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetMachineName
	I0229 01:48:49.987867  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetIP
	I0229 01:48:49.990596  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:49.990798  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:48:49.990826  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:49.990994  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHHostname
	I0229 01:48:49.993288  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:49.993604  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:48:49.993632  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:49.993789  340990 provision.go:138] copyHostCerts
	I0229 01:48:49.993822  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 01:48:49.993862  340990 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 01:48:49.993880  340990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 01:48:49.993963  340990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 01:48:49.994056  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 01:48:49.994081  340990 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 01:48:49.994089  340990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 01:48:49.994131  340990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 01:48:49.994194  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 01:48:49.994218  340990 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 01:48:49.994241  340990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 01:48:49.994276  340990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 01:48:49.994342  340990 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.multinode-107035-m02 san=[192.168.39.26 192.168.39.26 localhost 127.0.0.1 minikube multinode-107035-m02]
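provision.go generates server.pem in Go with the SAN list shown above; minikube has no openssl dependency. Purely as an illustration of what that SAN set amounts to, an equivalent certificate could be produced by hand like this (hypothetical commands, not anything minikube runs):

    # illustrative openssl equivalent of the generated server.pem; minikube does this in Go
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.multinode-107035-m02" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.39.26,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-107035-m02') \
      -out server.pem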
	I0229 01:48:50.168956  340990 provision.go:172] copyRemoteCerts
	I0229 01:48:50.169019  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:48:50.169050  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHHostname
	I0229 01:48:50.171843  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:50.172213  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:48:50.172243  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:50.172399  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHPort
	I0229 01:48:50.172610  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:48:50.172768  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHUsername
	I0229 01:48:50.172894  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035-m02/id_rsa Username:docker}
	I0229 01:48:50.262378  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 01:48:50.262453  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 01:48:50.290869  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 01:48:50.290945  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 01:48:50.317639  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 01:48:50.317711  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:48:50.344727  340990 provision.go:86] duration metric: configureAuth took 357.155948ms
	I0229 01:48:50.344754  340990 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:48:50.344950  340990 config.go:182] Loaded profile config "multinode-107035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:48:50.345023  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHHostname
	I0229 01:48:50.347643  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:50.348007  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:48:50.348037  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:48:50.348176  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHPort
	I0229 01:48:50.348358  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:48:50.348492  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:48:50.348596  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHUsername
	I0229 01:48:50.348719  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:48:50.348876  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0229 01:48:50.348893  340990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 01:50:20.810619  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 01:50:20.810656  340990 machine.go:91] provisioned docker machine in 1m31.063613757s
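The %!s(MISSING) tokens here (and %!N(MISSING) in the later date command) are Go fmt missing-argument markers: the command string evidently passes through a format call without arguments before being logged, so the mangling is in the log only. The command presumably executed on the guest was:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The timestamps also show where the 1m31.063613757s went: the command was issued at 01:48:50.348 and only returned at 01:50:20.810, so systemctl restart crio on m02 alone accounts for roughly 90.5s of the 91.1s provisioning step.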
	I0229 01:50:20.810684  340990 start.go:300] post-start starting for "multinode-107035-m02" (driver="kvm2")
	I0229 01:50:20.810699  340990 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:50:20.810731  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .DriverName
	I0229 01:50:20.811136  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:50:20.811179  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHHostname
	I0229 01:50:20.814368  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:20.814799  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:50:20.814821  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:20.814968  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHPort
	I0229 01:50:20.815187  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:50:20.815347  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHUsername
	I0229 01:50:20.815492  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035-m02/id_rsa Username:docker}
	I0229 01:50:20.903200  340990 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:50:20.908173  340990 command_runner.go:130] > NAME=Buildroot
	I0229 01:50:20.908190  340990 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 01:50:20.908194  340990 command_runner.go:130] > ID=buildroot
	I0229 01:50:20.908199  340990 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 01:50:20.908204  340990 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 01:50:20.908233  340990 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:50:20.908244  340990 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 01:50:20.908308  340990 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 01:50:20.908383  340990 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 01:50:20.908394  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> /etc/ssl/certs/3238852.pem
	I0229 01:50:20.908478  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:50:20.918450  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 01:50:20.947373  340990 start.go:303] post-start completed in 136.673086ms
	I0229 01:50:20.947419  340990 fix.go:56] fixHost completed within 1m31.222011419s
	I0229 01:50:20.947443  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHHostname
	I0229 01:50:20.950047  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:20.950414  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:50:20.950441  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:20.950605  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHPort
	I0229 01:50:20.950836  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:50:20.951000  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:50:20.951124  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHUsername
	I0229 01:50:20.951280  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:50:20.951520  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0229 01:50:20.951536  340990 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 01:50:21.059590  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709171421.050399107
	
	I0229 01:50:21.059621  340990 fix.go:206] guest clock: 1709171421.050399107
	I0229 01:50:21.059629  340990 fix.go:219] Guest: 2024-02-29 01:50:21.050399107 +0000 UTC Remote: 2024-02-29 01:50:20.947423517 +0000 UTC m=+446.673318595 (delta=102.97559ms)
	I0229 01:50:21.059646  340990 fix.go:190] guest clock delta is within tolerance: 102.97559ms
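The clock check runs date +%s.%N on the guest (mangled to %!s(MISSING).%!N(MISSING) in the log, as above) and compares it with the host-side timestamp: 1709171421.050399107 - 1709171420.947423517 = 0.10297559 s, i.e. exactly the 102.97559ms delta reported, which is inside minikube's skew tolerance, so no clock adjustment is forced.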
	I0229 01:50:21.059652  340990 start.go:83] releasing machines lock for "multinode-107035-m02", held for 1m31.334255918s
	I0229 01:50:21.059671  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .DriverName
	I0229 01:50:21.059935  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetIP
	I0229 01:50:21.062271  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:21.062715  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:50:21.062747  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:21.064643  340990 out.go:177] * Found network options:
	I0229 01:50:21.066004  340990 out.go:177]   - NO_PROXY=192.168.39.183
	W0229 01:50:21.067132  340990 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 01:50:21.067179  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .DriverName
	I0229 01:50:21.067817  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .DriverName
	I0229 01:50:21.068000  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .DriverName
	I0229 01:50:21.068101  340990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:50:21.068139  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHHostname
	W0229 01:50:21.068199  340990 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 01:50:21.068289  340990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 01:50:21.068329  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHHostname
	I0229 01:50:21.070802  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:21.071136  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:21.071200  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:50:21.071226  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:21.071383  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHPort
	I0229 01:50:21.071565  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:50:21.071728  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHUsername
	I0229 01:50:21.071732  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:50:21.071762  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:21.071893  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHPort
	I0229 01:50:21.071903  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035-m02/id_rsa Username:docker}
	I0229 01:50:21.072034  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:50:21.072216  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHUsername
	I0229 01:50:21.072389  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035-m02/id_rsa Username:docker}
	I0229 01:50:21.323216  340990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 01:50:21.323217  340990 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 01:50:21.330633  340990 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 01:50:21.330675  340990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:50:21.330731  340990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:50:21.341091  340990 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
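With the mangled %!p(MISSING) restored to the intended %p format verb, the disable step presumably ran as (reconstruction, not a verbatim log line):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name *bridge* -or -name *podman* \) -and -not -name *.mk_disabled \) \
      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;

That is, any bridge or podman CNI configs are renamed with a .mk_disabled suffix so only the CNI minikube manages (kindnet here) stays active; on this node there were none to move.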
	I0229 01:50:21.341116  340990 start.go:475] detecting cgroup driver to use...
	I0229 01:50:21.341185  340990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:50:21.359333  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:50:21.374888  340990 docker.go:217] disabling cri-docker service (if available) ...
	I0229 01:50:21.374946  340990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 01:50:21.389602  340990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 01:50:21.405298  340990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 01:50:21.535090  340990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 01:50:21.666999  340990 docker.go:233] disabling docker service ...
	I0229 01:50:21.667073  340990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 01:50:21.684630  340990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 01:50:21.700311  340990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 01:50:21.827433  340990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 01:50:21.957511  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 01:50:21.972611  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:50:21.997194  340990 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0229 01:50:21.997263  340990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 01:50:21.997310  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:50:22.009288  340990 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 01:50:22.009355  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:50:22.020821  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:50:22.032360  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:50:22.043539  340990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:50:22.055926  340990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:50:22.066474  340990 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 01:50:22.066544  340990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:50:22.078385  340990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:50:22.207881  340990 ssh_runner.go:195] Run: sudo systemctl restart crio
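Net effect of this block, reconstructed from the commands above rather than a verbatim dump of the files: /etc/crictl.yaml points crictl at the CRI-O socket, and the sed edits leave /etc/crio/crio.conf.d/02-crio.conf carrying

    runtime-endpoint: unix:///var/run/crio/crio.sock      # /etc/crictl.yaml
    pause_image = "registry.k8s.io/pause:3.9"             # 02-crio.conf
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

after which crio is restarted to pick the settings up.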
	I0229 01:50:22.711296  340990 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 01:50:22.711366  340990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 01:50:22.717512  340990 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0229 01:50:22.717541  340990 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 01:50:22.717552  340990 command_runner.go:130] > Device: 0,22	Inode: 1185        Links: 1
	I0229 01:50:22.717563  340990 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 01:50:22.717571  340990 command_runner.go:130] > Access: 2024-02-29 01:50:22.653322388 +0000
	I0229 01:50:22.717580  340990 command_runner.go:130] > Modify: 2024-02-29 01:50:22.653322388 +0000
	I0229 01:50:22.717589  340990 command_runner.go:130] > Change: 2024-02-29 01:50:22.653322388 +0000
	I0229 01:50:22.717596  340990 command_runner.go:130] >  Birth: -
	I0229 01:50:22.717627  340990 start.go:543] Will wait 60s for crictl version
	I0229 01:50:22.717684  340990 ssh_runner.go:195] Run: which crictl
	I0229 01:50:22.722050  340990 command_runner.go:130] > /usr/bin/crictl
	I0229 01:50:22.722196  340990 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:50:22.764611  340990 command_runner.go:130] > Version:  0.1.0
	I0229 01:50:22.764638  340990 command_runner.go:130] > RuntimeName:  cri-o
	I0229 01:50:22.764794  340990 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0229 01:50:22.764960  340990 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 01:50:22.766374  340990 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 01:50:22.766459  340990 ssh_runner.go:195] Run: crio --version
	I0229 01:50:22.803973  340990 command_runner.go:130] > crio version 1.29.1
	I0229 01:50:22.803993  340990 command_runner.go:130] > Version:        1.29.1
	I0229 01:50:22.803999  340990 command_runner.go:130] > GitCommit:      unknown
	I0229 01:50:22.804003  340990 command_runner.go:130] > GitCommitDate:  unknown
	I0229 01:50:22.804007  340990 command_runner.go:130] > GitTreeState:   clean
	I0229 01:50:22.804013  340990 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 01:50:22.804017  340990 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 01:50:22.804021  340990 command_runner.go:130] > Compiler:       gc
	I0229 01:50:22.804026  340990 command_runner.go:130] > Platform:       linux/amd64
	I0229 01:50:22.804030  340990 command_runner.go:130] > Linkmode:       dynamic
	I0229 01:50:22.804035  340990 command_runner.go:130] > BuildTags:      
	I0229 01:50:22.804039  340990 command_runner.go:130] >   containers_image_ostree_stub
	I0229 01:50:22.804044  340990 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 01:50:22.804050  340990 command_runner.go:130] >   btrfs_noversion
	I0229 01:50:22.804057  340990 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 01:50:22.804064  340990 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 01:50:22.804070  340990 command_runner.go:130] >   seccomp
	I0229 01:50:22.804084  340990 command_runner.go:130] > LDFlags:          unknown
	I0229 01:50:22.804097  340990 command_runner.go:130] > SeccompEnabled:   true
	I0229 01:50:22.804104  340990 command_runner.go:130] > AppArmorEnabled:  false
	I0229 01:50:22.804205  340990 ssh_runner.go:195] Run: crio --version
	I0229 01:50:22.837370  340990 command_runner.go:130] > crio version 1.29.1
	I0229 01:50:22.837397  340990 command_runner.go:130] > Version:        1.29.1
	I0229 01:50:22.837406  340990 command_runner.go:130] > GitCommit:      unknown
	I0229 01:50:22.837412  340990 command_runner.go:130] > GitCommitDate:  unknown
	I0229 01:50:22.837418  340990 command_runner.go:130] > GitTreeState:   clean
	I0229 01:50:22.837426  340990 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 01:50:22.837432  340990 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 01:50:22.837435  340990 command_runner.go:130] > Compiler:       gc
	I0229 01:50:22.837440  340990 command_runner.go:130] > Platform:       linux/amd64
	I0229 01:50:22.837444  340990 command_runner.go:130] > Linkmode:       dynamic
	I0229 01:50:22.837450  340990 command_runner.go:130] > BuildTags:      
	I0229 01:50:22.837454  340990 command_runner.go:130] >   containers_image_ostree_stub
	I0229 01:50:22.837467  340990 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 01:50:22.837470  340990 command_runner.go:130] >   btrfs_noversion
	I0229 01:50:22.837475  340990 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 01:50:22.837478  340990 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 01:50:22.837482  340990 command_runner.go:130] >   seccomp
	I0229 01:50:22.837486  340990 command_runner.go:130] > LDFlags:          unknown
	I0229 01:50:22.837490  340990 command_runner.go:130] > SeccompEnabled:   true
	I0229 01:50:22.837493  340990 command_runner.go:130] > AppArmorEnabled:  false
	I0229 01:50:22.839268  340990 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 01:50:22.840392  340990 out.go:177]   - env NO_PROXY=192.168.39.183
	I0229 01:50:22.841434  340990 main.go:141] libmachine: (multinode-107035-m02) Calling .GetIP
	I0229 01:50:22.844198  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:22.844609  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:50:22.844642  340990 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:50:22.844794  340990 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 01:50:22.849939  340990 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0229 01:50:22.850055  340990 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035 for IP: 192.168.39.26
	I0229 01:50:22.850084  340990 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:50:22.850277  340990 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 01:50:22.850328  340990 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 01:50:22.850343  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 01:50:22.850358  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 01:50:22.850373  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 01:50:22.850385  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 01:50:22.850433  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 01:50:22.850462  340990 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 01:50:22.850472  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 01:50:22.850492  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 01:50:22.850516  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:50:22.850547  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 01:50:22.850605  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 01:50:22.850643  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:50:22.850663  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem -> /usr/share/ca-certificates/323885.pem
	I0229 01:50:22.850679  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> /usr/share/ca-certificates/3238852.pem
	I0229 01:50:22.851028  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:50:22.879851  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:50:22.908004  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:50:22.935127  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 01:50:22.962809  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:50:22.990054  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 01:50:23.017622  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 01:50:23.045376  340990 ssh_runner.go:195] Run: openssl version
	I0229 01:50:23.052014  340990 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 01:50:23.052091  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:50:23.064512  340990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:50:23.069782  340990 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:50:23.069900  340990 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:50:23.069954  340990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:50:23.076271  340990 command_runner.go:130] > b5213941
	I0229 01:50:23.076337  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 01:50:23.086620  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 01:50:23.099192  340990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 01:50:23.104937  340990 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 01:50:23.105003  340990 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 01:50:23.105066  340990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 01:50:23.111460  340990 command_runner.go:130] > 51391683
	I0229 01:50:23.111619  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 01:50:23.122265  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 01:50:23.134073  340990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 01:50:23.139857  340990 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 01:50:23.139900  340990 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 01:50:23.139945  340990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 01:50:23.146320  340990 command_runner.go:130] > 3ec20f2e
	I0229 01:50:23.146400  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
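Each CA above follows the same three-step pattern: copy it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0, the layout OpenSSL expects for a hashed certificate directory. Using the values from this run:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"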
	I0229 01:50:23.156750  340990 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:50:23.161348  340990 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:50:23.161443  340990 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:50:23.161541  340990 ssh_runner.go:195] Run: crio config
	I0229 01:50:23.197286  340990 command_runner.go:130] ! time="2024-02-29 01:50:23.188190288Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0229 01:50:23.204313  340990 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0229 01:50:23.209396  340990 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0229 01:50:23.209417  340990 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0229 01:50:23.209428  340990 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0229 01:50:23.209433  340990 command_runner.go:130] > #
	I0229 01:50:23.209444  340990 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0229 01:50:23.209457  340990 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0229 01:50:23.209471  340990 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0229 01:50:23.209484  340990 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0229 01:50:23.209493  340990 command_runner.go:130] > # reload'.
	I0229 01:50:23.209501  340990 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0229 01:50:23.209513  340990 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0229 01:50:23.209527  340990 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0229 01:50:23.209539  340990 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0229 01:50:23.209543  340990 command_runner.go:130] > [crio]
	I0229 01:50:23.209551  340990 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0229 01:50:23.209563  340990 command_runner.go:130] > # containers images, in this directory.
	I0229 01:50:23.209571  340990 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0229 01:50:23.209585  340990 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0229 01:50:23.209592  340990 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0229 01:50:23.209608  340990 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0229 01:50:23.209615  340990 command_runner.go:130] > # imagestore = ""
	I0229 01:50:23.209621  340990 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0229 01:50:23.209631  340990 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0229 01:50:23.209635  340990 command_runner.go:130] > storage_driver = "overlay"
	I0229 01:50:23.209641  340990 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0229 01:50:23.209648  340990 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0229 01:50:23.209652  340990 command_runner.go:130] > storage_option = [
	I0229 01:50:23.209657  340990 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0229 01:50:23.209663  340990 command_runner.go:130] > ]
	I0229 01:50:23.209669  340990 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0229 01:50:23.209675  340990 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0229 01:50:23.209680  340990 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0229 01:50:23.209685  340990 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0229 01:50:23.209691  340990 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0229 01:50:23.209696  340990 command_runner.go:130] > # always happen on a node reboot
	I0229 01:50:23.209700  340990 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0229 01:50:23.209710  340990 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0229 01:50:23.209715  340990 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0229 01:50:23.209728  340990 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0229 01:50:23.209734  340990 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0229 01:50:23.209741  340990 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0229 01:50:23.209749  340990 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0229 01:50:23.209753  340990 command_runner.go:130] > # internal_wipe = true
	I0229 01:50:23.209760  340990 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0229 01:50:23.209766  340990 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0229 01:50:23.209770  340990 command_runner.go:130] > # internal_repair = false
	I0229 01:50:23.209778  340990 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0229 01:50:23.209783  340990 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0229 01:50:23.209789  340990 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0229 01:50:23.209795  340990 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0229 01:50:23.209800  340990 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0229 01:50:23.209806  340990 command_runner.go:130] > [crio.api]
	I0229 01:50:23.209811  340990 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0229 01:50:23.209821  340990 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0229 01:50:23.209833  340990 command_runner.go:130] > # IP address on which the stream server will listen.
	I0229 01:50:23.209837  340990 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0229 01:50:23.209843  340990 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0229 01:50:23.209848  340990 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0229 01:50:23.209852  340990 command_runner.go:130] > # stream_port = "0"
	I0229 01:50:23.209857  340990 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0229 01:50:23.209862  340990 command_runner.go:130] > # stream_enable_tls = false
	I0229 01:50:23.209868  340990 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0229 01:50:23.209875  340990 command_runner.go:130] > # stream_idle_timeout = ""
	I0229 01:50:23.209881  340990 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0229 01:50:23.209895  340990 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0229 01:50:23.209901  340990 command_runner.go:130] > # minutes.
	I0229 01:50:23.209905  340990 command_runner.go:130] > # stream_tls_cert = ""
	I0229 01:50:23.209912  340990 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0229 01:50:23.209918  340990 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0229 01:50:23.209924  340990 command_runner.go:130] > # stream_tls_key = ""
	I0229 01:50:23.209930  340990 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0229 01:50:23.209938  340990 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0229 01:50:23.209962  340990 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0229 01:50:23.209969  340990 command_runner.go:130] > # stream_tls_ca = ""
	I0229 01:50:23.209977  340990 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 01:50:23.209982  340990 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0229 01:50:23.209991  340990 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 01:50:23.210000  340990 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
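	The two grpc_max_*_msg_size values above cap CRI message sizes at 16 MiB (16777216 = 16 * 1024 * 1024), well below CRI-O's 80 MiB default. As a minimal sketch (not part of this log, assuming the standard google.golang.org/grpc package), a Go client dialing the [crio.api] socket would set matching call options so that large replies, such as image lists, are not rejected:

	package main

	import (
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	func main() {
		const maxMsg = 16 * 1024 * 1024 // matches grpc_max_send/recv_msg_size above

		// Dial the AF_LOCAL socket from the [crio.api] section; a client with
		// lower limits than the server would fail on large responses.
		conn, err := grpc.Dial(
			"unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(maxMsg),
				grpc.MaxCallSendMsgSize(maxMsg),
			),
		)
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	}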
	I0229 01:50:23.210007  340990 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0229 01:50:23.210015  340990 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0229 01:50:23.210021  340990 command_runner.go:130] > [crio.runtime]
	I0229 01:50:23.210027  340990 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0229 01:50:23.210035  340990 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0229 01:50:23.210038  340990 command_runner.go:130] > # "nofile=1024:2048"
	I0229 01:50:23.210045  340990 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0229 01:50:23.210051  340990 command_runner.go:130] > # default_ulimits = [
	I0229 01:50:23.210055  340990 command_runner.go:130] > # ]
	I0229 01:50:23.210063  340990 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0229 01:50:23.210068  340990 command_runner.go:130] > # no_pivot = false
	I0229 01:50:23.210074  340990 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0229 01:50:23.210082  340990 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0229 01:50:23.210089  340990 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0229 01:50:23.210094  340990 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0229 01:50:23.210101  340990 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0229 01:50:23.210107  340990 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 01:50:23.210114  340990 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0229 01:50:23.210119  340990 command_runner.go:130] > # Cgroup setting for conmon
	I0229 01:50:23.210128  340990 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0229 01:50:23.210137  340990 command_runner.go:130] > conmon_cgroup = "pod"
	I0229 01:50:23.210145  340990 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0229 01:50:23.210151  340990 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0229 01:50:23.210159  340990 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 01:50:23.210165  340990 command_runner.go:130] > conmon_env = [
	I0229 01:50:23.210170  340990 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 01:50:23.210176  340990 command_runner.go:130] > ]
	I0229 01:50:23.210181  340990 command_runner.go:130] > # Additional environment variables to set for all the
	I0229 01:50:23.210186  340990 command_runner.go:130] > # containers. These are overridden if set in the
	I0229 01:50:23.210195  340990 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0229 01:50:23.210199  340990 command_runner.go:130] > # default_env = [
	I0229 01:50:23.210204  340990 command_runner.go:130] > # ]
	I0229 01:50:23.210213  340990 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0229 01:50:23.210237  340990 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0229 01:50:23.210248  340990 command_runner.go:130] > # selinux = false
	I0229 01:50:23.210256  340990 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0229 01:50:23.210265  340990 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0229 01:50:23.210271  340990 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0229 01:50:23.210277  340990 command_runner.go:130] > # seccomp_profile = ""
	I0229 01:50:23.210282  340990 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0229 01:50:23.210290  340990 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0229 01:50:23.210298  340990 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0229 01:50:23.210303  340990 command_runner.go:130] > # which might increase security.
	I0229 01:50:23.210307  340990 command_runner.go:130] > # This option is currently deprecated,
	I0229 01:50:23.210317  340990 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0229 01:50:23.210325  340990 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0229 01:50:23.210333  340990 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0229 01:50:23.210341  340990 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0229 01:50:23.210347  340990 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0229 01:50:23.210356  340990 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0229 01:50:23.210363  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:50:23.210371  340990 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0229 01:50:23.210376  340990 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0229 01:50:23.210383  340990 command_runner.go:130] > # the cgroup blockio controller.
	I0229 01:50:23.210387  340990 command_runner.go:130] > # blockio_config_file = ""
	I0229 01:50:23.210393  340990 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0229 01:50:23.210399  340990 command_runner.go:130] > # blockio parameters.
	I0229 01:50:23.210404  340990 command_runner.go:130] > # blockio_reload = false
	I0229 01:50:23.210412  340990 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0229 01:50:23.210417  340990 command_runner.go:130] > # irqbalance daemon.
	I0229 01:50:23.210424  340990 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0229 01:50:23.210430  340990 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0229 01:50:23.210439  340990 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0229 01:50:23.210447  340990 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0229 01:50:23.210455  340990 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0229 01:50:23.210464  340990 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0229 01:50:23.210471  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:50:23.210475  340990 command_runner.go:130] > # rdt_config_file = ""
	I0229 01:50:23.210483  340990 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0229 01:50:23.210487  340990 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0229 01:50:23.210504  340990 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0229 01:50:23.210510  340990 command_runner.go:130] > # separate_pull_cgroup = ""
	I0229 01:50:23.210516  340990 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0229 01:50:23.210524  340990 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0229 01:50:23.210528  340990 command_runner.go:130] > # will be added.
	I0229 01:50:23.210534  340990 command_runner.go:130] > # default_capabilities = [
	I0229 01:50:23.210537  340990 command_runner.go:130] > # 	"CHOWN",
	I0229 01:50:23.210543  340990 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0229 01:50:23.210546  340990 command_runner.go:130] > # 	"FSETID",
	I0229 01:50:23.210552  340990 command_runner.go:130] > # 	"FOWNER",
	I0229 01:50:23.210556  340990 command_runner.go:130] > # 	"SETGID",
	I0229 01:50:23.210562  340990 command_runner.go:130] > # 	"SETUID",
	I0229 01:50:23.210566  340990 command_runner.go:130] > # 	"SETPCAP",
	I0229 01:50:23.210572  340990 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0229 01:50:23.210576  340990 command_runner.go:130] > # 	"KILL",
	I0229 01:50:23.210581  340990 command_runner.go:130] > # ]
	I0229 01:50:23.210588  340990 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0229 01:50:23.210596  340990 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0229 01:50:23.210603  340990 command_runner.go:130] > # add_inheritable_capabilities = false
	I0229 01:50:23.210609  340990 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0229 01:50:23.210617  340990 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 01:50:23.210623  340990 command_runner.go:130] > # default_sysctls = [
	I0229 01:50:23.210626  340990 command_runner.go:130] > # ]
	I0229 01:50:23.210634  340990 command_runner.go:130] > # List of devices on the host that a
	I0229 01:50:23.210642  340990 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0229 01:50:23.210647  340990 command_runner.go:130] > # allowed_devices = [
	I0229 01:50:23.210650  340990 command_runner.go:130] > # 	"/dev/fuse",
	I0229 01:50:23.210656  340990 command_runner.go:130] > # ]
	I0229 01:50:23.210660  340990 command_runner.go:130] > # List of additional devices, specified as
	I0229 01:50:23.210670  340990 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0229 01:50:23.210677  340990 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0229 01:50:23.210683  340990 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 01:50:23.210689  340990 command_runner.go:130] > # additional_devices = [
	I0229 01:50:23.210692  340990 command_runner.go:130] > # ]
	I0229 01:50:23.210697  340990 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0229 01:50:23.210703  340990 command_runner.go:130] > # cdi_spec_dirs = [
	I0229 01:50:23.210707  340990 command_runner.go:130] > # 	"/etc/cdi",
	I0229 01:50:23.210713  340990 command_runner.go:130] > # 	"/var/run/cdi",
	I0229 01:50:23.210717  340990 command_runner.go:130] > # ]
	I0229 01:50:23.210725  340990 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0229 01:50:23.210733  340990 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0229 01:50:23.210739  340990 command_runner.go:130] > # Defaults to false.
	I0229 01:50:23.210744  340990 command_runner.go:130] > # device_ownership_from_security_context = false
	I0229 01:50:23.210752  340990 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0229 01:50:23.210759  340990 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0229 01:50:23.210765  340990 command_runner.go:130] > # hooks_dir = [
	I0229 01:50:23.210769  340990 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0229 01:50:23.210774  340990 command_runner.go:130] > # ]
	I0229 01:50:23.210780  340990 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0229 01:50:23.210787  340990 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0229 01:50:23.210794  340990 command_runner.go:130] > # its default mounts from the following two files:
	I0229 01:50:23.210797  340990 command_runner.go:130] > #
	I0229 01:50:23.210803  340990 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0229 01:50:23.210811  340990 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0229 01:50:23.210822  340990 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0229 01:50:23.210827  340990 command_runner.go:130] > #
	I0229 01:50:23.210835  340990 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0229 01:50:23.210844  340990 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0229 01:50:23.210852  340990 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0229 01:50:23.210859  340990 command_runner.go:130] > #      only add mounts it finds in this file.
	I0229 01:50:23.210862  340990 command_runner.go:130] > #
	I0229 01:50:23.210866  340990 command_runner.go:130] > # default_mounts_file = ""
	I0229 01:50:23.210873  340990 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0229 01:50:23.210880  340990 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0229 01:50:23.210886  340990 command_runner.go:130] > pids_limit = 1024
	I0229 01:50:23.210892  340990 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0229 01:50:23.210900  340990 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0229 01:50:23.210906  340990 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0229 01:50:23.210916  340990 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0229 01:50:23.210923  340990 command_runner.go:130] > # log_size_max = -1
	I0229 01:50:23.210931  340990 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0229 01:50:23.210939  340990 command_runner.go:130] > # log_to_journald = false
	I0229 01:50:23.210946  340990 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0229 01:50:23.210951  340990 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0229 01:50:23.210959  340990 command_runner.go:130] > # Path to directory for container attach sockets.
	I0229 01:50:23.210963  340990 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0229 01:50:23.210971  340990 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0229 01:50:23.210977  340990 command_runner.go:130] > # bind_mount_prefix = ""
	I0229 01:50:23.210983  340990 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0229 01:50:23.210989  340990 command_runner.go:130] > # read_only = false
	I0229 01:50:23.210995  340990 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0229 01:50:23.211003  340990 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0229 01:50:23.211010  340990 command_runner.go:130] > # live configuration reload.
	I0229 01:50:23.211013  340990 command_runner.go:130] > # log_level = "info"
	I0229 01:50:23.211019  340990 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0229 01:50:23.211026  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:50:23.211030  340990 command_runner.go:130] > # log_filter = ""
	I0229 01:50:23.211038  340990 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0229 01:50:23.211045  340990 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0229 01:50:23.211051  340990 command_runner.go:130] > # separated by comma.
	I0229 01:50:23.211058  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:50:23.211064  340990 command_runner.go:130] > # uid_mappings = ""
	I0229 01:50:23.211070  340990 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0229 01:50:23.211078  340990 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0229 01:50:23.211084  340990 command_runner.go:130] > # separated by comma.
	I0229 01:50:23.211091  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:50:23.211096  340990 command_runner.go:130] > # gid_mappings = ""
	I0229 01:50:23.211102  340990 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0229 01:50:23.211110  340990 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 01:50:23.211117  340990 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 01:50:23.211127  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:50:23.211133  340990 command_runner.go:130] > # minimum_mappable_uid = -1
	I0229 01:50:23.211139  340990 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0229 01:50:23.211147  340990 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 01:50:23.211155  340990 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 01:50:23.211164  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:50:23.211172  340990 command_runner.go:130] > # minimum_mappable_gid = -1
	I0229 01:50:23.211180  340990 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0229 01:50:23.211188  340990 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0229 01:50:23.211193  340990 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0229 01:50:23.211197  340990 command_runner.go:130] > # ctr_stop_timeout = 30
	I0229 01:50:23.211203  340990 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0229 01:50:23.211210  340990 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0229 01:50:23.211215  340990 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0229 01:50:23.211221  340990 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0229 01:50:23.211226  340990 command_runner.go:130] > drop_infra_ctr = false
	I0229 01:50:23.211233  340990 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0229 01:50:23.211251  340990 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0229 01:50:23.211260  340990 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0229 01:50:23.211266  340990 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0229 01:50:23.211273  340990 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0229 01:50:23.211280  340990 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0229 01:50:23.211286  340990 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0229 01:50:23.211294  340990 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0229 01:50:23.211297  340990 command_runner.go:130] > # shared_cpuset = ""
	I0229 01:50:23.211305  340990 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0229 01:50:23.211311  340990 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0229 01:50:23.211319  340990 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0229 01:50:23.211326  340990 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0229 01:50:23.211332  340990 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0229 01:50:23.211337  340990 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0229 01:50:23.211345  340990 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0229 01:50:23.211352  340990 command_runner.go:130] > # enable_criu_support = false
	I0229 01:50:23.211357  340990 command_runner.go:130] > # Enable/disable the generation of container and sandbox lifecycle events,
	I0229 01:50:23.211364  340990 command_runner.go:130] > # which are sent to the Kubelet to optimize the PLEG.
	I0229 01:50:23.211370  340990 command_runner.go:130] > # enable_pod_events = false
	I0229 01:50:23.211376  340990 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0229 01:50:23.211388  340990 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0229 01:50:23.211392  340990 command_runner.go:130] > # default_runtime = "runc"
	I0229 01:50:23.211399  340990 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0229 01:50:23.211406  340990 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0229 01:50:23.211419  340990 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0229 01:50:23.211425  340990 command_runner.go:130] > # creation as a file is not desired either.
	I0229 01:50:23.211433  340990 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0229 01:50:23.211440  340990 command_runner.go:130] > # the hostname is being managed dynamically.
	I0229 01:50:23.211445  340990 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0229 01:50:23.211449  340990 command_runner.go:130] > # ]
	I0229 01:50:23.211455  340990 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0229 01:50:23.211463  340990 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0229 01:50:23.211468  340990 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0229 01:50:23.211475  340990 command_runner.go:130] > # Each entry in the table should follow the format:
	I0229 01:50:23.211478  340990 command_runner.go:130] > #
	I0229 01:50:23.211482  340990 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0229 01:50:23.211489  340990 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0229 01:50:23.211493  340990 command_runner.go:130] > # runtime_type = "oci"
	I0229 01:50:23.211517  340990 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0229 01:50:23.211523  340990 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0229 01:50:23.211528  340990 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0229 01:50:23.211533  340990 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0229 01:50:23.211539  340990 command_runner.go:130] > # monitor_env = []
	I0229 01:50:23.211544  340990 command_runner.go:130] > # privileged_without_host_devices = false
	I0229 01:50:23.211550  340990 command_runner.go:130] > # allowed_annotations = []
	I0229 01:50:23.211555  340990 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0229 01:50:23.211558  340990 command_runner.go:130] > # Where:
	I0229 01:50:23.211566  340990 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0229 01:50:23.211572  340990 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0229 01:50:23.211580  340990 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0229 01:50:23.211588  340990 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0229 01:50:23.211594  340990 command_runner.go:130] > #   in $PATH.
	I0229 01:50:23.211600  340990 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0229 01:50:23.211606  340990 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0229 01:50:23.211614  340990 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0229 01:50:23.211619  340990 command_runner.go:130] > #   state.
	I0229 01:50:23.211626  340990 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0229 01:50:23.211634  340990 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0229 01:50:23.211642  340990 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0229 01:50:23.211647  340990 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0229 01:50:23.211659  340990 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0229 01:50:23.211668  340990 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0229 01:50:23.211675  340990 command_runner.go:130] > #   The currently recognized values are:
	I0229 01:50:23.211681  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0229 01:50:23.211690  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0229 01:50:23.211698  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0229 01:50:23.211706  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0229 01:50:23.211715  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0229 01:50:23.211724  340990 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0229 01:50:23.211732  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0229 01:50:23.211738  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0229 01:50:23.211746  340990 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0229 01:50:23.211754  340990 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0229 01:50:23.211760  340990 command_runner.go:130] > #   deprecated option "conmon".
	I0229 01:50:23.211767  340990 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0229 01:50:23.211773  340990 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0229 01:50:23.211780  340990 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0229 01:50:23.211787  340990 command_runner.go:130] > #   should be moved to the container's cgroup
	I0229 01:50:23.211793  340990 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0229 01:50:23.211800  340990 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0229 01:50:23.211806  340990 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0229 01:50:23.211817  340990 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0229 01:50:23.211823  340990 command_runner.go:130] > #
	I0229 01:50:23.211827  340990 command_runner.go:130] > # Using the seccomp notifier feature:
	I0229 01:50:23.211830  340990 command_runner.go:130] > #
	I0229 01:50:23.211838  340990 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0229 01:50:23.211844  340990 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0229 01:50:23.211849  340990 command_runner.go:130] > #
	I0229 01:50:23.211855  340990 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0229 01:50:23.211863  340990 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0229 01:50:23.211869  340990 command_runner.go:130] > #
	I0229 01:50:23.211875  340990 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0229 01:50:23.211881  340990 command_runner.go:130] > # feature.
	I0229 01:50:23.211884  340990 command_runner.go:130] > #
	I0229 01:50:23.211891  340990 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0229 01:50:23.211899  340990 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0229 01:50:23.211906  340990 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0229 01:50:23.211914  340990 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0229 01:50:23.211920  340990 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0229 01:50:23.211925  340990 command_runner.go:130] > #
	I0229 01:50:23.211931  340990 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0229 01:50:23.211939  340990 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0229 01:50:23.211944  340990 command_runner.go:130] > #
	I0229 01:50:23.211950  340990 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0229 01:50:23.211957  340990 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0229 01:50:23.211963  340990 command_runner.go:130] > #
	I0229 01:50:23.211969  340990 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0229 01:50:23.211977  340990 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0229 01:50:23.211981  340990 command_runner.go:130] > # limitation.
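	As an illustrative sketch only (not from this log; the pod name, container name, and image are placeholders), a Pod opting into the seccomp notifier described above could be built with the standard k8s.io/api types; note that restartPolicy must be Never, as the comments explain:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "seccomp-debug", // placeholder name
				Annotations: map[string]string{
					// Terminate the workload ~5s after a blocked syscall is seen.
					"io.kubernetes.cri-o.seccompNotifierAction": "stop",
				},
			},
			Spec: corev1.PodSpec{
				// Required: otherwise the kubelet restarts the container immediately.
				RestartPolicy: corev1.RestartPolicyNever,
				Containers: []corev1.Container{
					{Name: "app", Image: "registry.example.com/app:latest"},
				},
			},
		}
		fmt.Println(pod.Name)
	}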
	I0229 01:50:23.211985  340990 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0229 01:50:23.211992  340990 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0229 01:50:23.211996  340990 command_runner.go:130] > runtime_type = "oci"
	I0229 01:50:23.212002  340990 command_runner.go:130] > runtime_root = "/run/runc"
	I0229 01:50:23.212006  340990 command_runner.go:130] > runtime_config_path = ""
	I0229 01:50:23.212013  340990 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0229 01:50:23.212017  340990 command_runner.go:130] > monitor_cgroup = "pod"
	I0229 01:50:23.212023  340990 command_runner.go:130] > monitor_exec_cgroup = ""
	I0229 01:50:23.212027  340990 command_runner.go:130] > monitor_env = [
	I0229 01:50:23.212034  340990 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 01:50:23.212037  340990 command_runner.go:130] > ]
	I0229 01:50:23.212042  340990 command_runner.go:130] > privileged_without_host_devices = false
	I0229 01:50:23.212051  340990 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0229 01:50:23.212056  340990 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0229 01:50:23.212064  340990 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0229 01:50:23.212073  340990 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0229 01:50:23.212082  340990 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0229 01:50:23.212089  340990 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0229 01:50:23.212100  340990 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0229 01:50:23.212110  340990 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0229 01:50:23.212118  340990 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0229 01:50:23.212127  340990 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0229 01:50:23.212133  340990 command_runner.go:130] > # Example:
	I0229 01:50:23.212138  340990 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0229 01:50:23.212145  340990 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0229 01:50:23.212150  340990 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0229 01:50:23.212157  340990 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0229 01:50:23.212160  340990 command_runner.go:130] > # cpuset = "0-1"
	I0229 01:50:23.212167  340990 command_runner.go:130] > # cpushares = 0
	I0229 01:50:23.212170  340990 command_runner.go:130] > # Where:
	I0229 01:50:23.212174  340990 command_runner.go:130] > # The workload name is workload-type.
	I0229 01:50:23.212181  340990 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0229 01:50:23.212188  340990 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0229 01:50:23.212194  340990 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0229 01:50:23.212203  340990 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0229 01:50:23.212211  340990 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
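	A minimal sketch of those annotations in Go (the container name "app" and the share value "512" are placeholders, not values from this log):

	package main

	import "fmt"

	func main() {
		annotations := map[string]string{
			// Activation annotation: matched by key only, value is ignored.
			"io.crio/workload": "",
			// Per-container override, following the example form above.
			"io.crio.workload-type/app": `{"cpushares": "512"}`,
		}
		fmt.Println(annotations)
	}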
	I0229 01:50:23.212218  340990 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0229 01:50:23.212224  340990 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0229 01:50:23.212231  340990 command_runner.go:130] > # Default value is set to true
	I0229 01:50:23.212235  340990 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0229 01:50:23.212243  340990 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0229 01:50:23.212250  340990 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0229 01:50:23.212254  340990 command_runner.go:130] > # Default value is set to 'false'
	I0229 01:50:23.212261  340990 command_runner.go:130] > # disable_hostport_mapping = false
	I0229 01:50:23.212267  340990 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0229 01:50:23.212272  340990 command_runner.go:130] > #
	I0229 01:50:23.212277  340990 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0229 01:50:23.212285  340990 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0229 01:50:23.212292  340990 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0229 01:50:23.212300  340990 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0229 01:50:23.212308  340990 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0229 01:50:23.212312  340990 command_runner.go:130] > [crio.image]
	I0229 01:50:23.212319  340990 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0229 01:50:23.212326  340990 command_runner.go:130] > # default_transport = "docker://"
	I0229 01:50:23.212332  340990 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0229 01:50:23.212340  340990 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0229 01:50:23.212346  340990 command_runner.go:130] > # global_auth_file = ""
	I0229 01:50:23.212351  340990 command_runner.go:130] > # The image used to instantiate infra containers.
	I0229 01:50:23.212358  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:50:23.212364  340990 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0229 01:50:23.212372  340990 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0229 01:50:23.212380  340990 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0229 01:50:23.212385  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:50:23.212391  340990 command_runner.go:130] > # pause_image_auth_file = ""
	I0229 01:50:23.212396  340990 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0229 01:50:23.212404  340990 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0229 01:50:23.212410  340990 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0229 01:50:23.212419  340990 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0229 01:50:23.212425  340990 command_runner.go:130] > # pause_command = "/pause"
	I0229 01:50:23.212431  340990 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0229 01:50:23.212438  340990 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0229 01:50:23.212444  340990 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0229 01:50:23.212451  340990 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0229 01:50:23.212461  340990 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0229 01:50:23.212468  340990 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0229 01:50:23.212474  340990 command_runner.go:130] > # pinned_images = [
	I0229 01:50:23.212477  340990 command_runner.go:130] > # ]
	I0229 01:50:23.212484  340990 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0229 01:50:23.212492  340990 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0229 01:50:23.212500  340990 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0229 01:50:23.212508  340990 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0229 01:50:23.212515  340990 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0229 01:50:23.212519  340990 command_runner.go:130] > # signature_policy = ""
	I0229 01:50:23.212526  340990 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0229 01:50:23.212532  340990 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0229 01:50:23.212540  340990 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0229 01:50:23.212550  340990 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0229 01:50:23.212557  340990 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0229 01:50:23.212564  340990 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0229 01:50:23.212570  340990 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0229 01:50:23.212579  340990 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0229 01:50:23.212585  340990 command_runner.go:130] > # changing them here.
	I0229 01:50:23.212589  340990 command_runner.go:130] > # insecure_registries = [
	I0229 01:50:23.212595  340990 command_runner.go:130] > # ]
	I0229 01:50:23.212601  340990 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0229 01:50:23.212608  340990 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0229 01:50:23.212612  340990 command_runner.go:130] > # image_volumes = "mkdir"
	I0229 01:50:23.212619  340990 command_runner.go:130] > # Temporary directory to use for storing big files
	I0229 01:50:23.212627  340990 command_runner.go:130] > # big_files_temporary_dir = ""
	I0229 01:50:23.212633  340990 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0229 01:50:23.212639  340990 command_runner.go:130] > # CNI plugins.
	I0229 01:50:23.212643  340990 command_runner.go:130] > [crio.network]
	I0229 01:50:23.212651  340990 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0229 01:50:23.212658  340990 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0229 01:50:23.212662  340990 command_runner.go:130] > # cni_default_network = ""
	I0229 01:50:23.212670  340990 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0229 01:50:23.212674  340990 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0229 01:50:23.212681  340990 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0229 01:50:23.212685  340990 command_runner.go:130] > # plugin_dirs = [
	I0229 01:50:23.212692  340990 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0229 01:50:23.212695  340990 command_runner.go:130] > # ]
	I0229 01:50:23.212700  340990 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0229 01:50:23.212704  340990 command_runner.go:130] > [crio.metrics]
	I0229 01:50:23.212709  340990 command_runner.go:130] > # Globally enable or disable metrics support.
	I0229 01:50:23.212715  340990 command_runner.go:130] > enable_metrics = true
	I0229 01:50:23.212719  340990 command_runner.go:130] > # Specify enabled metrics collectors.
	I0229 01:50:23.212726  340990 command_runner.go:130] > # Per default all metrics are enabled.
	I0229 01:50:23.212732  340990 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0229 01:50:23.212739  340990 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0229 01:50:23.212745  340990 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0229 01:50:23.212751  340990 command_runner.go:130] > # metrics_collectors = [
	I0229 01:50:23.212754  340990 command_runner.go:130] > # 	"operations",
	I0229 01:50:23.212758  340990 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0229 01:50:23.212763  340990 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0229 01:50:23.212767  340990 command_runner.go:130] > # 	"operations_errors",
	I0229 01:50:23.212772  340990 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0229 01:50:23.212776  340990 command_runner.go:130] > # 	"image_pulls_by_name",
	I0229 01:50:23.212782  340990 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0229 01:50:23.212786  340990 command_runner.go:130] > # 	"image_pulls_failures",
	I0229 01:50:23.212790  340990 command_runner.go:130] > # 	"image_pulls_successes",
	I0229 01:50:23.212795  340990 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0229 01:50:23.212800  340990 command_runner.go:130] > # 	"image_layer_reuse",
	I0229 01:50:23.212808  340990 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0229 01:50:23.212811  340990 command_runner.go:130] > # 	"containers_oom_total",
	I0229 01:50:23.212819  340990 command_runner.go:130] > # 	"containers_oom",
	I0229 01:50:23.212825  340990 command_runner.go:130] > # 	"processes_defunct",
	I0229 01:50:23.212829  340990 command_runner.go:130] > # 	"operations_total",
	I0229 01:50:23.212833  340990 command_runner.go:130] > # 	"operations_latency_seconds",
	I0229 01:50:23.212837  340990 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0229 01:50:23.212841  340990 command_runner.go:130] > # 	"operations_errors_total",
	I0229 01:50:23.212845  340990 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0229 01:50:23.212850  340990 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0229 01:50:23.212854  340990 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0229 01:50:23.212860  340990 command_runner.go:130] > # 	"image_pulls_success_total",
	I0229 01:50:23.212864  340990 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0229 01:50:23.212868  340990 command_runner.go:130] > # 	"containers_oom_count_total",
	I0229 01:50:23.212873  340990 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0229 01:50:23.212877  340990 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0229 01:50:23.212880  340990 command_runner.go:130] > # ]
	I0229 01:50:23.212884  340990 command_runner.go:130] > # The port on which the metrics server will listen.
	I0229 01:50:23.212890  340990 command_runner.go:130] > # metrics_port = 9090
	I0229 01:50:23.212895  340990 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0229 01:50:23.212901  340990 command_runner.go:130] > # metrics_socket = ""
	I0229 01:50:23.212906  340990 command_runner.go:130] > # The certificate for the secure metrics server.
	I0229 01:50:23.212914  340990 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0229 01:50:23.212921  340990 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0229 01:50:23.212927  340990 command_runner.go:130] > # certificate on any modification event.
	I0229 01:50:23.212931  340990 command_runner.go:130] > # metrics_cert = ""
	I0229 01:50:23.212939  340990 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0229 01:50:23.212943  340990 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0229 01:50:23.212949  340990 command_runner.go:130] > # metrics_key = ""
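	Since enable_metrics = true above and the commented-out default port is 9090, the Prometheus endpoint can be scraped directly; a minimal standard-library sketch (host and port assumed from those defaults):

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Plain HTTP is assumed here; metrics_cert/metrics_key above are unset.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", body) // Prometheus text format, e.g. crio_operations_total
	}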
	I0229 01:50:23.212954  340990 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0229 01:50:23.212960  340990 command_runner.go:130] > [crio.tracing]
	I0229 01:50:23.212965  340990 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0229 01:50:23.212971  340990 command_runner.go:130] > # enable_tracing = false
	I0229 01:50:23.212977  340990 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0229 01:50:23.212983  340990 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0229 01:50:23.212990  340990 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0229 01:50:23.212997  340990 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0229 01:50:23.213002  340990 command_runner.go:130] > # CRI-O NRI configuration.
	I0229 01:50:23.213007  340990 command_runner.go:130] > [crio.nri]
	I0229 01:50:23.213011  340990 command_runner.go:130] > # Globally enable or disable NRI.
	I0229 01:50:23.213017  340990 command_runner.go:130] > # enable_nri = false
	I0229 01:50:23.213022  340990 command_runner.go:130] > # NRI socket to listen on.
	I0229 01:50:23.213029  340990 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0229 01:50:23.213033  340990 command_runner.go:130] > # NRI plugin directory to use.
	I0229 01:50:23.213039  340990 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0229 01:50:23.213044  340990 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0229 01:50:23.213050  340990 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0229 01:50:23.213055  340990 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0229 01:50:23.213060  340990 command_runner.go:130] > # nri_disable_connections = false
	I0229 01:50:23.213065  340990 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0229 01:50:23.213071  340990 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0229 01:50:23.213076  340990 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0229 01:50:23.213083  340990 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0229 01:50:23.213089  340990 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0229 01:50:23.213094  340990 command_runner.go:130] > [crio.stats]
	I0229 01:50:23.213100  340990 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0229 01:50:23.213107  340990 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0229 01:50:23.213111  340990 command_runner.go:130] > # stats_collection_period = 0
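	The dump above is CRI-O's TOML configuration. As an illustrative sketch (assuming the third-party github.com/BurntSushi/toml package; only a fragment of the file is modeled), a few of the keys printed above decode like this:

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	func main() {
		// A fragment mirroring the dump above, not the full file.
		const sample = `
	[crio]
	storage_driver = "overlay"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	pids_limit = 1024
	`
		var cfg struct {
			Crio struct {
				StorageDriver string `toml:"storage_driver"`
				Runtime       struct {
					CgroupManager string `toml:"cgroup_manager"`
					PidsLimit     int    `toml:"pids_limit"`
				} `toml:"runtime"`
			} `toml:"crio"`
		}
		if _, err := toml.Decode(sample, &cfg); err != nil {
			panic(err)
		}
		fmt.Println(cfg.Crio.StorageDriver, cfg.Crio.Runtime.CgroupManager, cfg.Crio.Runtime.PidsLimit)
	}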
	I0229 01:50:23.213197  340990 cni.go:84] Creating CNI manager for ""
	I0229 01:50:23.213208  340990 cni.go:136] 3 nodes found, recommending kindnet
	I0229 01:50:23.213218  340990 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:50:23.213237  340990 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-107035 NodeName:multinode-107035-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 01:50:23.213384  340990 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-107035-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
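The rendered config above is a single multi-document YAML combining InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, with the worker's node-ip and an empty taint list injected for m02. As a sketch, assuming the file were saved as kubeadm.yaml on the node, recent kubeadm releases (v1.26+; v1.28.4 is in use here) can schema-check such a file before it is used:

    # Compare against the defaults kubeadm would generate on its own.
    kubeadm config print init-defaults
    # Validate the rendered multi-document file against the v1beta3 API.
    kubeadm config validate --config kubeadm.yaml
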
	I0229 01:50:23.213438  340990 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-107035-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-107035 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
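The kubelet unit override above clears ExecStart and restates it with the CRI endpoint, hostname override and node IP; the scp lines that follow place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Applying such a drop-in by hand follows the usual systemd pattern (sketch):

    sudo systemctl daemon-reload     # re-read unit files after changing the drop-in
    sudo systemctl restart kubelet
    systemctl cat kubelet            # confirm the override is layered onto the unit
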
	I0229 01:50:23.213491  340990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 01:50:23.225171  340990 command_runner.go:130] > kubeadm
	I0229 01:50:23.225198  340990 command_runner.go:130] > kubectl
	I0229 01:50:23.225202  340990 command_runner.go:130] > kubelet
	I0229 01:50:23.225390  340990 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:50:23.225447  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 01:50:23.236406  340990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0229 01:50:23.258927  340990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 01:50:23.280240  340990 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0229 01:50:23.284834  340990 command_runner.go:130] > 192.168.39.183	control-plane.minikube.internal
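The grep confirms the control-plane alias is already present in /etc/hosts on the worker. An idempotent way to guarantee such an entry exists (sketch; IP and hostname taken from the log above):

    grep -q 'control-plane.minikube.internal' /etc/hosts || \
      echo '192.168.39.183 control-plane.minikube.internal' | sudo tee -a /etc/hosts
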
	I0229 01:50:23.285055  340990 host.go:66] Checking if "multinode-107035" exists ...
	I0229 01:50:23.285317  340990 config.go:182] Loaded profile config "multinode-107035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:50:23.285465  340990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:50:23.285516  340990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:50:23.301063  340990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37957
	I0229 01:50:23.301538  340990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:50:23.302009  340990 main.go:141] libmachine: Using API Version  1
	I0229 01:50:23.302033  340990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:50:23.302462  340990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:50:23.302629  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:50:23.302797  340990 start.go:304] JoinCluster: &{Name:multinode-107035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-107035 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:50:23.302926  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 01:50:23.302948  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:50:23.305931  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:50:23.306333  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:50:23.306360  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:50:23.306540  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:50:23.306724  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:50:23.306902  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:50:23.307044  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa Username:docker}
	I0229 01:50:23.475580  340990 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token v0q7cb.4u0v8k5ljre5415w --discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
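kubeadm prints a ready-made join command with a fresh token; --ttl=0 made the token non-expiring. The discovery hash embedded in it can be recomputed from the cluster CA on the control-plane node and compared, using the standard recipe from the kubeadm documentation:

    # SHA-256 over the DER-encoded public key of the cluster CA certificate.
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
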
	I0229 01:50:23.475647  340990 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.26 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0229 01:50:23.475692  340990 host.go:66] Checking if "multinode-107035" exists ...
	I0229 01:50:23.476178  340990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:50:23.476244  340990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:50:23.495811  340990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44207
	I0229 01:50:23.496374  340990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:50:23.496928  340990 main.go:141] libmachine: Using API Version  1
	I0229 01:50:23.496951  340990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:50:23.497305  340990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:50:23.497518  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:50:23.497709  340990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-107035-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0229 01:50:23.497734  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:50:23.500241  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:50:23.500672  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:50:23.500703  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:50:23.500879  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:50:23.501075  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:50:23.501239  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:50:23.501365  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa Username:docker}
	I0229 01:50:23.659522  340990 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0229 01:50:23.727618  340990 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-g9fbr, kube-system/kube-proxy-2vt7v
	I0229 01:50:26.755235  340990 command_runner.go:130] > node/multinode-107035-m02 cordoned
	I0229 01:50:26.755272  340990 command_runner.go:130] > pod "busybox-5b5d89c9d6-gz4cd" has DeletionTimestamp older than 1 seconds, skipping
	I0229 01:50:26.755282  340990 command_runner.go:130] > node/multinode-107035-m02 drained
	I0229 01:50:26.755313  340990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-107035-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.257571384s)
	I0229 01:50:26.755339  340990 node.go:108] successfully drained node "m02"
	I0229 01:50:26.755705  340990 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:50:26.755932  340990 kapi.go:59] client config for multinode-107035: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.key", CAFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:50:26.756404  340990 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0229 01:50:26.756463  340990 round_trippers.go:463] DELETE https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m02
	I0229 01:50:26.756472  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:26.756479  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:26.756485  340990 round_trippers.go:473]     Content-Type: application/json
	I0229 01:50:26.756489  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:26.770080  340990 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0229 01:50:26.770103  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:26.770112  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:26.770117  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:26.770120  340990 round_trippers.go:580]     Content-Length: 171
	I0229 01:50:26.770124  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:26 GMT
	I0229 01:50:26.770128  340990 round_trippers.go:580]     Audit-Id: ff1a34cc-1d29-49f5-851f-cf27fcc59b3e
	I0229 01:50:26.770132  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:26.770136  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:26.770175  340990 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-107035-m02","kind":"nodes","uid":"ce7e14a9-031d-40ba-b40d-27d557da3a03"}}
	I0229 01:50:26.770264  340990 node.go:124] successfully deleted node "m02"
	I0229 01:50:26.770279  340990 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.26 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
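Before rejoining, minikube drains the stale m02 via kubectl and then deletes the Node object with a raw DELETE against the API (the 200 response above), so the node name can be reused. The same two steps as plain kubectl porcelain (sketch; note the deprecation warning earlier — --delete-emptydir-data now replaces --delete-local-data):

    kubectl drain multinode-107035-m02 \
      --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data
    kubectl delete node multinode-107035-m02
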
	I0229 01:50:26.770310  340990 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.26 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0229 01:50:26.770336  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v0q7cb.4u0v8k5ljre5415w --discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-107035-m02"
	I0229 01:50:26.825232  340990 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 01:50:27.109628  340990 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 01:50:27.109710  340990 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 01:50:27.208321  340990 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:50:27.208531  340990 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:50:27.208612  340990 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 01:50:27.383007  340990 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 01:50:27.904774  340990 command_runner.go:130] > This node has joined the cluster:
	I0229 01:50:27.904814  340990 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 01:50:27.904823  340990 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 01:50:27.904832  340990 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 01:50:27.908020  340990 command_runner.go:130] ! W0229 01:50:26.816000    2613 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0229 01:50:27.908051  340990 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0229 01:50:27.908061  340990 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0229 01:50:27.908075  340990 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0229 01:50:27.908116  340990 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token v0q7cb.4u0v8k5ljre5415w --discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-107035-m02": (1.137744634s)
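The join succeeds in about 1.1s, but the preflight warnings (kubelet.conf and pki/ca.crt already present, port 10250 in use) show leftover state from the node's previous membership; --ignore-preflight-errors=all is what lets the join proceed over them. On a machine being re-enrolled by hand, the cleaner sequence would be a reset first (sketch; token and hash copied verbatim from the log above):

    sudo kubeadm reset -f
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token v0q7cb.4u0v8k5ljre5415w \
      --discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37
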
	I0229 01:50:27.908154  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 01:50:28.226245  340990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=multinode-107035 minikube.k8s.io/updated_at=2024_02_29T01_50_28_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:50:28.339441  340990 command_runner.go:130] > node/multinode-107035-m02 labeled
	I0229 01:50:28.355273  340990 command_runner.go:130] > node/multinode-107035-m03 labeled
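A single label call tags every non-primary node, which is why both m02 and m03 report "labeled": the -l selector picks the nodes and --overwrite allows re-labeling on restart. The same pattern in isolation (sketch):

    kubectl label nodes -l 'minikube.k8s.io/primary!=true' \
      minikube.k8s.io/name=multinode-107035 --overwrite
    kubectl get nodes --show-labels   # inspect the result
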
	I0229 01:50:28.357510  340990 start.go:306] JoinCluster complete in 5.05470822s
	I0229 01:50:28.357538  340990 cni.go:84] Creating CNI manager for ""
	I0229 01:50:28.357547  340990 cni.go:136] 3 nodes found, recommending kindnet
	I0229 01:50:28.357607  340990 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 01:50:28.363740  340990 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 01:50:28.363779  340990 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 01:50:28.363798  340990 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 01:50:28.363807  340990 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 01:50:28.363816  340990 command_runner.go:130] > Access: 2024-02-29 01:48:05.172185555 +0000
	I0229 01:50:28.363824  340990 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 01:50:28.363831  340990 command_runner.go:130] > Change: 2024-02-29 01:48:03.809050024 +0000
	I0229 01:50:28.363838  340990 command_runner.go:130] >  Birth: -
	I0229 01:50:28.364061  340990 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 01:50:28.364080  340990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 01:50:28.387496  340990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 01:50:28.738426  340990 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 01:50:28.741974  340990 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 01:50:28.744565  340990 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 01:50:28.753965  340990 command_runner.go:130] > daemonset.apps/kindnet configured
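Re-applying the kindnet manifest leaves the RBAC objects and ServiceAccount "unchanged" but reconciles the DaemonSet ("configured") so a pod gets scheduled onto the rejoined node. To apply a CNI manifest and block until it is running everywhere (sketch, using the manifest path from the log):

    kubectl apply -f /var/tmp/minikube/cni.yaml
    kubectl -n kube-system rollout status daemonset/kindnet
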
	I0229 01:50:28.757012  340990 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:50:28.757262  340990 kapi.go:59] client config for multinode-107035: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.key", CAFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:50:28.757692  340990 round_trippers.go:463] GET https://192.168.39.183:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 01:50:28.757707  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.757716  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.757721  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.759615  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:50:28.759635  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.759645  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.759652  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.759660  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.759665  340990 round_trippers.go:580]     Content-Length: 291
	I0229 01:50:28.759669  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.759674  340990 round_trippers.go:580]     Audit-Id: 07c94b47-31a1-4525-aa28-ca8fb5355746
	I0229 01:50:28.759677  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.759697  340990 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"886475f9-4800-446f-81db-efbd75717fab","resourceVersion":"838","creationTimestamp":"2024-02-29T01:38:23Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 01:50:28.759793  340990 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-107035" context rescaled to 1 replicas
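minikube pins coredns to one replica through the autoscaling/v1 Scale subresource shown above. The kubectl equivalent (sketch):

    kubectl -n kube-system scale deployment/coredns --replicas=1
    kubectl -n kube-system get deployment coredns
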
	I0229 01:50:28.759828  340990 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.26 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0229 01:50:28.762038  340990 out.go:177] * Verifying Kubernetes components...
	I0229 01:50:28.763142  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:50:28.779104  340990 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:50:28.779415  340990 kapi.go:59] client config for multinode-107035: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.key", CAFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:50:28.779716  340990 node_ready.go:35] waiting up to 6m0s for node "multinode-107035-m02" to be "Ready" ...
	I0229 01:50:28.779798  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m02
	I0229 01:50:28.779810  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.779820  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.779826  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.782218  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:50:28.782251  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.782261  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.782266  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.782270  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.782275  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.782279  340990 round_trippers.go:580]     Audit-Id: 6a14ac78-3de4-4a2d-b578-2fbe12dee243
	I0229 01:50:28.782284  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.782635  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035-m02","uid":"141aef2e-52e8-4d4b-87d6-36291e7a5ea8","resourceVersion":"995","creationTimestamp":"2024-02-29T01:50:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T01_50_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:50:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3992 chars]
	I0229 01:50:28.782959  340990 node_ready.go:49] node "multinode-107035-m02" has status "Ready":"True"
	I0229 01:50:28.782976  340990 node_ready.go:38] duration metric: took 3.242292ms waiting for node "multinode-107035-m02" to be "Ready" ...
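The Ready check returns almost immediately (3.2ms) because the freshly joined Node object already carries a Ready=True condition. The same wait expressed with kubectl, on the same 6-minute budget (sketch):

    kubectl wait --for=condition=Ready node/multinode-107035-m02 --timeout=6m
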
	I0229 01:50:28.782985  340990 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 01:50:28.783040  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I0229 01:50:28.783048  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.783054  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.783059  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.786972  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:50:28.786986  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.786992  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.786996  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.786999  340990 round_trippers.go:580]     Audit-Id: 8d199c82-1fcc-4f1f-9ea0-741b005ab08d
	I0229 01:50:28.787005  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.787008  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.787013  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.787983  340990 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1003"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"820","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81675 chars]
	I0229 01:50:28.790337  340990 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:28.790425  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:50:28.790433  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.790440  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.790444  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.793879  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:50:28.793894  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.793899  340990 round_trippers.go:580]     Audit-Id: 125ee8f4-1bc6-4d22-9765-5537d7fe9db3
	I0229 01:50:28.793902  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.793908  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.793910  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.793914  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.793922  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.794126  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"820","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6226 chars]
	I0229 01:50:28.794527  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:50:28.794541  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.794548  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.794551  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.796935  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:50:28.796949  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.796954  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.796959  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.796961  340990 round_trippers.go:580]     Audit-Id: 92a92a86-ed01-4eb4-be21-fb280effd716
	I0229 01:50:28.796964  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.796967  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.796970  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.797130  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:50:28.797427  340990 pod_ready.go:92] pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace has status "Ready":"True"
	I0229 01:50:28.797446  340990 pod_ready.go:81] duration metric: took 7.091708ms waiting for pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:28.797454  340990 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:28.797507  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:50:28.797515  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.797522  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.797529  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.801163  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:50:28.801178  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.801185  340990 round_trippers.go:580]     Audit-Id: 3ca5a306-3aa6-4359-a34b-51087883c516
	I0229 01:50:28.801188  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.801191  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.801195  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.801197  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.801199  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.802115  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"841","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5825 chars]
	I0229 01:50:28.802480  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:50:28.802496  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.802506  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.802511  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.807524  340990 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 01:50:28.807542  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.807548  340990 round_trippers.go:580]     Audit-Id: e5dd0aa5-0ef7-42ae-8190-7166589fe575
	I0229 01:50:28.807553  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.807555  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.807558  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.807561  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.807564  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.808225  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:50:28.808549  340990 pod_ready.go:92] pod "etcd-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:50:28.808566  340990 pod_ready.go:81] duration metric: took 11.106682ms waiting for pod "etcd-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:28.808581  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:28.808634  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-107035
	I0229 01:50:28.808643  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.808649  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.808654  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.811336  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:50:28.811354  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.811363  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.811377  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.811382  340990 round_trippers.go:580]     Audit-Id: 08b4d4c9-a364-454d-aeb6-bb1babcba20d
	I0229 01:50:28.811385  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.811391  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.811399  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.812090  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-107035","namespace":"kube-system","uid":"c8a5ad6e-c2cc-49a4-8837-ba1b280f87af","resourceVersion":"839","creationTimestamp":"2024-02-29T01:38:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.183:8443","kubernetes.io/config.hash":"f8e3f19840dda0faee1ad3a91ae482c1","kubernetes.io/config.mirror":"f8e3f19840dda0faee1ad3a91ae482c1","kubernetes.io/config.seen":"2024-02-29T01:38:16.621158531Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7351 chars]
	I0229 01:50:28.812615  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:50:28.812637  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.812647  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.812654  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.815035  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:50:28.815052  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.815060  340990 round_trippers.go:580]     Audit-Id: e31543c6-05da-4868-a2c6-0a3740e32343
	I0229 01:50:28.815067  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.815071  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.815075  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.815078  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.815082  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.815379  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:50:28.815777  340990 pod_ready.go:92] pod "kube-apiserver-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:50:28.815801  340990 pod_ready.go:81] duration metric: took 7.211298ms waiting for pod "kube-apiserver-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:28.815813  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:28.815883  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-107035
	I0229 01:50:28.815896  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.815906  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.815915  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.817859  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:50:28.817878  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.817887  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.817891  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.817895  340990 round_trippers.go:580]     Audit-Id: f836dde2-6ee8-49b9-9294-792e9759b95b
	I0229 01:50:28.817899  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.817903  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.817907  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.818166  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-107035","namespace":"kube-system","uid":"cc34d9e0-d4bd-4fac-8c94-6ead8a744abc","resourceVersion":"834","creationTimestamp":"2024-02-29T01:38:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d885436ac2f1544135b29b38fb6816fc","kubernetes.io/config.mirror":"d885436ac2f1544135b29b38fb6816fc","kubernetes.io/config.seen":"2024-02-29T01:38:23.684826383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6907 chars]
	I0229 01:50:28.818561  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:50:28.818574  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.818581  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.818585  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.820944  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:50:28.820960  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.820978  340990 round_trippers.go:580]     Audit-Id: e8e21865-5ba9-4459-9c73-046578735105
	I0229 01:50:28.820984  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.820987  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.820992  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.820996  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.821000  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.821154  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:50:28.821550  340990 pod_ready.go:92] pod "kube-controller-manager-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:50:28.821573  340990 pod_ready.go:81] duration metric: took 5.743565ms waiting for pod "kube-controller-manager-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:28.821581  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2vt7v" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:28.979903  340990 request.go:629] Waited for 158.239068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vt7v
	I0229 01:50:28.979969  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vt7v
	I0229 01:50:28.979974  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:28.979982  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:28.979985  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:28.983258  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:50:28.983281  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:28.983289  340990 round_trippers.go:580]     Audit-Id: c135bd68-b468-43f1-960a-d4902376a638
	I0229 01:50:28.983293  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:28.983297  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:28.983300  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:28.983303  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:28.983307  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:28 GMT
	I0229 01:50:28.984179  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vt7v","generateName":"kube-proxy-","namespace":"kube-system","uid":"eaa78334-8191-47e9-b001-343c90a87460","resourceVersion":"1001","creationTimestamp":"2024-02-29T01:39:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:39:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5693 chars]
	I0229 01:50:29.179975  340990 request.go:629] Waited for 195.31495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m02
	I0229 01:50:29.180044  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m02
	I0229 01:50:29.180049  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:29.180057  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:29.180063  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:29.182588  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:50:29.182606  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:29.182612  340990 round_trippers.go:580]     Audit-Id: db2a7f99-fcb6-424c-ab83-f94efa6775cd
	I0229 01:50:29.182616  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:29.182619  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:29.182621  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:29.182624  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:29.182626  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:29 GMT
	I0229 01:50:29.182805  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035-m02","uid":"141aef2e-52e8-4d4b-87d6-36291e7a5ea8","resourceVersion":"995","creationTimestamp":"2024-02-29T01:50:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T01_50_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:50:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{ [truncated 3992 chars]
	I0229 01:50:29.183070  340990 pod_ready.go:92] pod "kube-proxy-2vt7v" in "kube-system" namespace has status "Ready":"True"
	I0229 01:50:29.183085  340990 pod_ready.go:81] duration metric: took 361.498016ms waiting for pod "kube-proxy-2vt7v" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:29.183095  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vhtd" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:29.380672  340990 request.go:629] Waited for 197.50104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vhtd
	I0229 01:50:29.380743  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vhtd
	I0229 01:50:29.380749  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:29.380757  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:29.380761  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:29.384299  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:50:29.384328  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:29.384338  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:29 GMT
	I0229 01:50:29.384344  340990 round_trippers.go:580]     Audit-Id: cdb66af9-1f89-4f6e-8d19-0833e842d801
	I0229 01:50:29.384350  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:29.384358  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:29.384362  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:29.384366  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:29.384616  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7vhtd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1a552ea7-1d99-46ec-99e1-30ad4ac72ca8","resourceVersion":"775","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5484 chars]
	I0229 01:50:29.579924  340990 request.go:629] Waited for 194.662686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:50:29.580016  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:50:29.580028  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:29.580039  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:29.580058  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:29.585834  340990 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 01:50:29.585864  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:29.585874  340990 round_trippers.go:580]     Audit-Id: 766bd9e5-607f-415d-8a14-98dc7edc8654
	I0229 01:50:29.585897  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:29.585903  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:29.585909  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:29.585912  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:29.585917  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:29 GMT
	I0229 01:50:29.586088  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:50:29.586515  340990 pod_ready.go:92] pod "kube-proxy-7vhtd" in "kube-system" namespace has status "Ready":"True"
	I0229 01:50:29.586540  340990 pod_ready.go:81] duration metric: took 403.436752ms waiting for pod "kube-proxy-7vhtd" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:29.586556  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fhzft" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:29.779870  340990 request.go:629] Waited for 193.21839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhzft
	I0229 01:50:29.779948  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhzft
	I0229 01:50:29.779955  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:29.779965  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:29.779973  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:29.782691  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:50:29.782719  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:29.782729  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:29 GMT
	I0229 01:50:29.782736  340990 round_trippers.go:580]     Audit-Id: c631990b-16dd-4dbb-b458-cbf5f433c9b7
	I0229 01:50:29.782740  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:29.782744  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:29.782750  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:29.782754  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:29.783061  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fhzft","generateName":"kube-proxy-","namespace":"kube-system","uid":"3b05cd87-92a9-4c59-879a-d42c3a08c7d4","resourceVersion":"669","creationTimestamp":"2024-02-29T01:40:04Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:40:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5492 chars]
	I0229 01:50:29.979863  340990 request.go:629] Waited for 196.309627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m03
	I0229 01:50:29.979945  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m03
	I0229 01:50:29.979952  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:29.979962  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:29.979970  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:29.982505  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:50:29.982545  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:29.982556  340990 round_trippers.go:580]     Audit-Id: 3336d422-550f-46d7-b44b-88c7e33de9e7
	I0229 01:50:29.982563  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:29.982568  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:29.982574  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:29.982579  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:29.982584  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:29 GMT
	I0229 01:50:29.983019  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035-m03","uid":"7068367c-f5dd-4a1d-bba4-904a860289cd","resourceVersion":"996","creationTimestamp":"2024-02-29T01:40:43Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T01_50_28_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0229 01:50:29.983347  340990 pod_ready.go:92] pod "kube-proxy-fhzft" in "kube-system" namespace has status "Ready":"True"
	I0229 01:50:29.983368  340990 pod_ready.go:81] duration metric: took 396.80311ms waiting for pod "kube-proxy-fhzft" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:29.983382  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:30.180563  340990 request.go:629] Waited for 197.095107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107035
	I0229 01:50:30.180674  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107035
	I0229 01:50:30.180683  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:30.180694  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:30.180702  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:30.183201  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:50:30.183221  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:30.183228  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:30.183232  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:30.183236  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:30 GMT
	I0229 01:50:30.183240  340990 round_trippers.go:580]     Audit-Id: 74b5ad36-64f5-4f69-9116-a49ca2d51c3f
	I0229 01:50:30.183248  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:30.183252  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:30.183481  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-107035","namespace":"kube-system","uid":"ac9bc04a-dac0-40f5-b928-4cacd028df82","resourceVersion":"840","creationTimestamp":"2024-02-29T01:38:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef2538a195901383d6f1be68d27ee2ba","kubernetes.io/config.mirror":"ef2538a195901383d6f1be68d27ee2ba","kubernetes.io/config.seen":"2024-02-29T01:38:23.684827179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4646 chars]
	I0229 01:50:30.380331  340990 request.go:629] Waited for 196.417783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:50:30.380418  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:50:30.380424  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:30.380431  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:30.380441  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:30.383487  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:50:30.383507  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:30.383514  340990 round_trippers.go:580]     Audit-Id: 80eeaebf-3301-4fb2-ba2f-d5c612c8dfa6
	I0229 01:50:30.383517  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:30.383521  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:30.383525  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:30.383529  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:30.383531  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:30 GMT
	I0229 01:50:30.383973  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:50:30.384308  340990 pod_ready.go:92] pod "kube-scheduler-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:50:30.384327  340990 pod_ready.go:81] duration metric: took 400.93733ms waiting for pod "kube-scheduler-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:50:30.384337  340990 pod_ready.go:38] duration metric: took 1.601340612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
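For context, the pod_ready waits above poll each system pod's Ready condition through the API server, with client-go's client-side rate limiter spacing the requests (hence the "Waited for ...ms due to client-side throttling" lines). A minimal client-go sketch of that check, for reference only — the pod name, interval, and error handling below are illustrative, not minikube's actual pod_ready.go:

	// poll_ready.go - illustrative sketch, assuming a kubeconfig at the default path.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-2vt7v", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// the log's `has status "Ready":"True"` corresponds to this condition
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // real requests are spaced further apart by throttling
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}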
	I0229 01:50:30.384355  340990 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:50:30.384412  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:50:30.400831  340990 system_svc.go:56] duration metric: took 16.46719ms WaitForService to wait for kubelet.
	I0229 01:50:30.400936  340990 kubeadm.go:581] duration metric: took 1.64106403s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 01:50:30.400986  340990 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:50:30.580270  340990 request.go:629] Waited for 179.198812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I0229 01:50:30.580352  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I0229 01:50:30.580357  340990 round_trippers.go:469] Request Headers:
	I0229 01:50:30.580365  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:50:30.580371  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:50:30.583658  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:50:30.583680  340990 round_trippers.go:577] Response Headers:
	I0229 01:50:30.583687  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:50:30.583690  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:50:30.583693  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:50:30 GMT
	I0229 01:50:30.583696  340990 round_trippers.go:580]     Audit-Id: 9a4d9f64-40b8-4ab3-91ca-79a748eea8eb
	I0229 01:50:30.583698  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:50:30.583701  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:50:30.584221  340990 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1012"},"items":[{"metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16491 chars]
	I0229 01:50:30.584810  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:50:30.584827  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:50:30.584838  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:50:30.584841  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:50:30.584845  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:50:30.584848  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:50:30.584852  340990 node_conditions.go:105] duration metric: took 183.857342ms to run NodePressure ...
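The NodePressure verification above reads capacity straight off the NodeList response; the three ephemeral-storage/cpu pairs correspond to the cluster's three nodes. A hedged sketch of the same read, assuming a default kubeconfig — not minikube's actual node_conditions.go:

	// node_capacity.go - illustrative only.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			// e.g. "multinode-107035: ephemeral-storage=17734596Ki cpu=2", as logged above
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}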
	I0229 01:50:30.584865  340990 start.go:228] waiting for startup goroutines ...
	I0229 01:50:30.584885  340990 start.go:242] writing updated cluster config ...
	I0229 01:50:30.585310  340990 config.go:182] Loaded profile config "multinode-107035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:50:30.585393  340990 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/config.json ...
	I0229 01:50:30.587581  340990 out.go:177] * Starting worker node multinode-107035-m03 in cluster multinode-107035
	I0229 01:50:30.588828  340990 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:50:30.588857  340990 cache.go:56] Caching tarball of preloaded images
	I0229 01:50:30.588975  340990 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 01:50:30.588991  340990 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 01:50:30.589090  340990 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/config.json ...
	I0229 01:50:30.589257  340990 start.go:365] acquiring machines lock for multinode-107035-m03: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:50:30.589325  340990 start.go:369] acquired machines lock for "multinode-107035-m03" in 29.309µs
	I0229 01:50:30.589345  340990 start.go:96] Skipping create...Using existing machine configuration
	I0229 01:50:30.589355  340990 fix.go:54] fixHost starting: m03
	I0229 01:50:30.589603  340990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:50:30.589641  340990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:50:30.604652  340990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0229 01:50:30.605070  340990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:50:30.605622  340990 main.go:141] libmachine: Using API Version  1
	I0229 01:50:30.605644  340990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:50:30.606031  340990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:50:30.606216  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .DriverName
	I0229 01:50:30.606408  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetState
	I0229 01:50:30.607902  340990 fix.go:102] recreateIfNeeded on multinode-107035-m03: state=Running err=<nil>
	W0229 01:50:30.607919  340990 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 01:50:30.609566  340990 out.go:177] * Updating the running kvm2 "multinode-107035-m03" VM ...
	I0229 01:50:30.610935  340990 machine.go:88] provisioning docker machine ...
	I0229 01:50:30.610959  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .DriverName
	I0229 01:50:30.611186  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetMachineName
	I0229 01:50:30.611373  340990 buildroot.go:166] provisioning hostname "multinode-107035-m03"
	I0229 01:50:30.611391  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetMachineName
	I0229 01:50:30.611552  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHHostname
	I0229 01:50:30.614432  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:30.614900  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:50:30.614929  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:30.615092  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHPort
	I0229 01:50:30.615326  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:50:30.615488  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:50:30.615639  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHUsername
	I0229 01:50:30.615787  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:50:30.615952  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0229 01:50:30.615964  340990 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-107035-m03 && echo "multinode-107035-m03" | sudo tee /etc/hostname
	I0229 01:50:30.748124  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-107035-m03
	
	I0229 01:50:30.748159  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHHostname
	I0229 01:50:30.751063  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:30.751381  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:50:30.751415  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:30.751621  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHPort
	I0229 01:50:30.751838  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:50:30.752026  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:50:30.752220  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHUsername
	I0229 01:50:30.752419  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:50:30.752713  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0229 01:50:30.752743  340990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-107035-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-107035-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-107035-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:50:30.867636  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:50:30.867670  340990 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 01:50:30.867692  340990 buildroot.go:174] setting up certificates
	I0229 01:50:30.867704  340990 provision.go:83] configureAuth start
	I0229 01:50:30.867713  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetMachineName
	I0229 01:50:30.868059  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetIP
	I0229 01:50:30.870429  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:30.870888  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:50:30.870909  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:30.871072  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHHostname
	I0229 01:50:30.873284  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:30.873660  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:50:30.873730  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:30.873817  340990 provision.go:138] copyHostCerts
	I0229 01:50:30.873852  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 01:50:30.873896  340990 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 01:50:30.873906  340990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 01:50:30.873979  340990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 01:50:30.874063  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 01:50:30.874081  340990 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 01:50:30.874088  340990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 01:50:30.874113  340990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 01:50:30.874153  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 01:50:30.874171  340990 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 01:50:30.874177  340990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 01:50:30.874197  340990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 01:50:30.874270  340990 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.multinode-107035-m03 san=[192.168.39.121 192.168.39.121 localhost 127.0.0.1 minikube multinode-107035-m03]
	I0229 01:50:31.101663  340990 provision.go:172] copyRemoteCerts
	I0229 01:50:31.101723  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:50:31.101751  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHHostname
	I0229 01:50:31.104553  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:31.104968  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:50:31.105002  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:31.105151  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHPort
	I0229 01:50:31.105381  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:50:31.105559  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHUsername
	I0229 01:50:31.105700  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035-m03/id_rsa Username:docker}
	I0229 01:50:31.190994  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 01:50:31.191065  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 01:50:31.220422  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 01:50:31.220491  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 01:50:31.247325  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 01:50:31.247407  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 01:50:31.273275  340990 provision.go:86] duration metric: configureAuth took 405.557514ms
	I0229 01:50:31.273306  340990 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:50:31.273592  340990 config.go:182] Loaded profile config "multinode-107035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:50:31.273725  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHHostname
	I0229 01:50:31.276495  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:31.276894  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:50:31.276928  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:50:31.277124  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHPort
	I0229 01:50:31.277333  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:50:31.277492  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:50:31.277653  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHUsername
	I0229 01:50:31.277835  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:50:31.278002  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0229 01:50:31.278016  340990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 01:52:01.721678  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 01:52:01.721725  340990 machine.go:91] provisioned docker machine in 1m31.110770298s
	I0229 01:52:01.721743  340990 start.go:300] post-start starting for "multinode-107035-m03" (driver="kvm2")
	I0229 01:52:01.721789  340990 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:52:01.721819  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .DriverName
	I0229 01:52:01.722267  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:52:01.722307  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHHostname
	I0229 01:52:01.725330  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:01.725747  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:52:01.725802  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:01.725973  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHPort
	I0229 01:52:01.726187  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:52:01.726348  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHUsername
	I0229 01:52:01.726521  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035-m03/id_rsa Username:docker}
	I0229 01:52:01.814533  340990 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:52:01.819636  340990 command_runner.go:130] > NAME=Buildroot
	I0229 01:52:01.819663  340990 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 01:52:01.819671  340990 command_runner.go:130] > ID=buildroot
	I0229 01:52:01.819678  340990 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 01:52:01.819685  340990 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 01:52:01.819784  340990 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:52:01.819803  340990 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 01:52:01.819889  340990 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 01:52:01.819960  340990 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 01:52:01.819970  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> /etc/ssl/certs/3238852.pem
	I0229 01:52:01.820053  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:52:01.830611  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 01:52:01.861518  340990 start.go:303] post-start completed in 139.756999ms
	I0229 01:52:01.861549  340990 fix.go:56] fixHost completed within 1m31.272193257s
	I0229 01:52:01.861581  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHHostname
	I0229 01:52:01.864407  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:01.864737  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:52:01.864770  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:01.864934  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHPort
	I0229 01:52:01.865159  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:52:01.865361  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:52:01.865513  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHUsername
	I0229 01:52:01.865665  340990 main.go:141] libmachine: Using SSH client type: native
	I0229 01:52:01.865845  340990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0229 01:52:01.865856  340990 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 01:52:01.975907  340990 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709171521.967304379
	
	I0229 01:52:01.975943  340990 fix.go:206] guest clock: 1709171521.967304379
	I0229 01:52:01.975954  340990 fix.go:219] Guest: 2024-02-29 01:52:01.967304379 +0000 UTC Remote: 2024-02-29 01:52:01.861555047 +0000 UTC m=+547.587450102 (delta=105.749332ms)
	I0229 01:52:01.975976  340990 fix.go:190] guest clock delta is within tolerance: 105.749332ms
	I0229 01:52:01.975983  340990 start.go:83] releasing machines lock for "multinode-107035-m03", held for 1m31.38664605s
	I0229 01:52:01.976010  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .DriverName
	I0229 01:52:01.976328  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetIP
	I0229 01:52:01.979189  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:01.979480  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:52:01.979502  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:01.981515  340990 out.go:177] * Found network options:
	I0229 01:52:01.982860  340990 out.go:177]   - NO_PROXY=192.168.39.183,192.168.39.26
	W0229 01:52:01.984021  340990 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 01:52:01.984043  340990 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 01:52:01.984058  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .DriverName
	I0229 01:52:01.984678  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .DriverName
	I0229 01:52:01.984882  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .DriverName
	I0229 01:52:01.985015  340990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:52:01.985068  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHHostname
	W0229 01:52:01.985104  340990 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 01:52:01.985133  340990 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 01:52:01.985216  340990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 01:52:01.985241  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHHostname
	I0229 01:52:01.987534  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:01.987804  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:01.987961  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:52:01.987994  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:01.988154  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHPort
	I0229 01:52:01.988266  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:52:01.988290  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:01.988356  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:52:01.988491  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHPort
	I0229 01:52:01.988577  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHUsername
	I0229 01:52:01.988678  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHKeyPath
	I0229 01:52:01.988729  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035-m03/id_rsa Username:docker}
	I0229 01:52:01.988817  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetSSHUsername
	I0229 01:52:01.988972  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035-m03/id_rsa Username:docker}
	I0229 01:52:02.231601  340990 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 01:52:02.231658  340990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 01:52:02.238606  340990 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 01:52:02.238793  340990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:52:02.238854  340990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:52:02.252005  340990 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 01:52:02.252040  340990 start.go:475] detecting cgroup driver to use...
	I0229 01:52:02.252130  340990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:52:02.275487  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:52:02.293612  340990 docker.go:217] disabling cri-docker service (if available) ...
	I0229 01:52:02.293677  340990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 01:52:02.312531  340990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 01:52:02.328644  340990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 01:52:02.461194  340990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 01:52:02.596450  340990 docker.go:233] disabling docker service ...
	I0229 01:52:02.596583  340990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 01:52:02.616770  340990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 01:52:02.633576  340990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 01:52:02.777461  340990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 01:52:02.925164  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 01:52:02.942477  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:52:02.966648  340990 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0229 01:52:02.966701  340990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 01:52:02.966764  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:52:02.980119  340990 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 01:52:02.980199  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:52:02.992501  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:52:03.005759  340990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 01:52:03.018103  340990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:52:03.031700  340990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:52:03.042708  340990 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 01:52:03.042808  340990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:52:03.054048  340990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:52:03.183437  340990 ssh_runner.go:195] Run: sudo systemctl restart crio
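The four sed edits above (pause_image, cgroup_manager, conmon_cgroup delete, conmon_cgroup insert) rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. For reference, a sketch that replays exactly those commands via os/exec — illustrative only; minikube drives them over SSH through its ssh_runner, and running this locally would modify a real crio config:

	// crio_config.go - replays the sed edits from the log; run only against a throwaway VM.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		}
		for _, c := range cmds {
			if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
				fmt.Printf("%s failed: %v (%s)\n", c, err, out)
				return
			}
		}
		// after the edits: sudo systemctl daemon-reload && sudo systemctl restart crio
	}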
	I0229 01:52:03.832001  340990 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 01:52:03.832115  340990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 01:52:03.837663  340990 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0229 01:52:03.837696  340990 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 01:52:03.837705  340990 command_runner.go:130] > Device: 0,22	Inode: 1130        Links: 1
	I0229 01:52:03.837715  340990 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 01:52:03.837722  340990 command_runner.go:130] > Access: 2024-02-29 01:52:03.774766073 +0000
	I0229 01:52:03.837731  340990 command_runner.go:130] > Modify: 2024-02-29 01:52:03.774766073 +0000
	I0229 01:52:03.837739  340990 command_runner.go:130] > Change: 2024-02-29 01:52:03.774766073 +0000
	I0229 01:52:03.837745  340990 command_runner.go:130] >  Birth: -
	I0229 01:52:03.837920  340990 start.go:543] Will wait 60s for crictl version
	I0229 01:52:03.837978  340990 ssh_runner.go:195] Run: which crictl
	I0229 01:52:03.842835  340990 command_runner.go:130] > /usr/bin/crictl
	I0229 01:52:03.842892  340990 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:52:03.889500  340990 command_runner.go:130] > Version:  0.1.0
	I0229 01:52:03.889524  340990 command_runner.go:130] > RuntimeName:  cri-o
	I0229 01:52:03.889528  340990 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0229 01:52:03.889534  340990 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 01:52:03.890742  340990 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 01:52:03.890829  340990 ssh_runner.go:195] Run: crio --version
	I0229 01:52:03.921191  340990 command_runner.go:130] > crio version 1.29.1
	I0229 01:52:03.921213  340990 command_runner.go:130] > Version:        1.29.1
	I0229 01:52:03.921219  340990 command_runner.go:130] > GitCommit:      unknown
	I0229 01:52:03.921223  340990 command_runner.go:130] > GitCommitDate:  unknown
	I0229 01:52:03.921226  340990 command_runner.go:130] > GitTreeState:   clean
	I0229 01:52:03.921232  340990 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 01:52:03.921236  340990 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 01:52:03.921240  340990 command_runner.go:130] > Compiler:       gc
	I0229 01:52:03.921248  340990 command_runner.go:130] > Platform:       linux/amd64
	I0229 01:52:03.921252  340990 command_runner.go:130] > Linkmode:       dynamic
	I0229 01:52:03.921257  340990 command_runner.go:130] > BuildTags:      
	I0229 01:52:03.921261  340990 command_runner.go:130] >   containers_image_ostree_stub
	I0229 01:52:03.921265  340990 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 01:52:03.921269  340990 command_runner.go:130] >   btrfs_noversion
	I0229 01:52:03.921273  340990 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 01:52:03.921277  340990 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 01:52:03.921280  340990 command_runner.go:130] >   seccomp
	I0229 01:52:03.921284  340990 command_runner.go:130] > LDFlags:          unknown
	I0229 01:52:03.921289  340990 command_runner.go:130] > SeccompEnabled:   true
	I0229 01:52:03.921293  340990 command_runner.go:130] > AppArmorEnabled:  false
	I0229 01:52:03.921368  340990 ssh_runner.go:195] Run: crio --version
	I0229 01:52:03.951638  340990 command_runner.go:130] > crio version 1.29.1
	I0229 01:52:03.951666  340990 command_runner.go:130] > Version:        1.29.1
	I0229 01:52:03.951672  340990 command_runner.go:130] > GitCommit:      unknown
	I0229 01:52:03.951676  340990 command_runner.go:130] > GitCommitDate:  unknown
	I0229 01:52:03.951680  340990 command_runner.go:130] > GitTreeState:   clean
	I0229 01:52:03.951686  340990 command_runner.go:130] > BuildDate:      2024-02-23T03:27:48Z
	I0229 01:52:03.951690  340990 command_runner.go:130] > GoVersion:      go1.21.6
	I0229 01:52:03.951693  340990 command_runner.go:130] > Compiler:       gc
	I0229 01:52:03.951698  340990 command_runner.go:130] > Platform:       linux/amd64
	I0229 01:52:03.951701  340990 command_runner.go:130] > Linkmode:       dynamic
	I0229 01:52:03.951706  340990 command_runner.go:130] > BuildTags:      
	I0229 01:52:03.951710  340990 command_runner.go:130] >   containers_image_ostree_stub
	I0229 01:52:03.951715  340990 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0229 01:52:03.951718  340990 command_runner.go:130] >   btrfs_noversion
	I0229 01:52:03.951723  340990 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0229 01:52:03.951731  340990 command_runner.go:130] >   libdm_no_deferred_remove
	I0229 01:52:03.951735  340990 command_runner.go:130] >   seccomp
	I0229 01:52:03.951739  340990 command_runner.go:130] > LDFlags:          unknown
	I0229 01:52:03.951743  340990 command_runner.go:130] > SeccompEnabled:   true
	I0229 01:52:03.951747  340990 command_runner.go:130] > AppArmorEnabled:  false
	I0229 01:52:03.954992  340990 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 01:52:03.956339  340990 out.go:177]   - env NO_PROXY=192.168.39.183
	I0229 01:52:03.957645  340990 out.go:177]   - env NO_PROXY=192.168.39.183,192.168.39.26
	I0229 01:52:03.958898  340990 main.go:141] libmachine: (multinode-107035-m03) Calling .GetIP
	I0229 01:52:03.961754  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:03.962219  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:0c:94", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:40:37 +0000 UTC Type:0 Mac:52:54:00:72:0c:94 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-107035-m03 Clientid:01:52:54:00:72:0c:94}
	I0229 01:52:03.962274  340990 main.go:141] libmachine: (multinode-107035-m03) DBG | domain multinode-107035-m03 has defined IP address 192.168.39.121 and MAC address 52:54:00:72:0c:94 in network mk-multinode-107035
	I0229 01:52:03.962472  340990 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 01:52:03.967749  340990 command_runner.go:130] > 192.168.39.1	host.minikube.internal
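	The grep above confirms that host.minikube.internal already maps to the host gateway 192.168.39.1. For the missing-entry case (hypothetical here, since the lookup succeeded), an idempotent append would look like:
	# Add the entry only if absent; the log above shows it already present.
	grep -q 'host.minikube.internal$' /etc/hosts \
	  || printf '192.168.39.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts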
	I0229 01:52:03.967959  340990 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035 for IP: 192.168.39.121
	I0229 01:52:03.967995  340990 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:52:03.968178  340990 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 01:52:03.968215  340990 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 01:52:03.968229  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 01:52:03.968243  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 01:52:03.968254  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 01:52:03.968266  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 01:52:03.968321  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 01:52:03.968350  340990 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 01:52:03.968361  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 01:52:03.968392  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 01:52:03.968418  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:52:03.968445  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 01:52:03.968484  340990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 01:52:03.968510  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:52:03.968525  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem -> /usr/share/ca-certificates/323885.pem
	I0229 01:52:03.968534  340990 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> /usr/share/ca-certificates/3238852.pem
	I0229 01:52:03.969007  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:52:03.999246  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:52:04.027198  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:52:04.054563  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 01:52:04.083278  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:52:04.111346  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 01:52:04.138771  340990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 01:52:04.167334  340990 ssh_runner.go:195] Run: openssl version
	I0229 01:52:04.173896  340990 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 01:52:04.174307  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:52:04.186890  340990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:52:04.192273  340990 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:52:04.192309  340990 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:52:04.192447  340990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:52:04.199023  340990 command_runner.go:130] > b5213941
	I0229 01:52:04.199111  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 01:52:04.210111  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 01:52:04.222002  340990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 01:52:04.226943  340990 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 01:52:04.227205  340990 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 01:52:04.227278  340990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 01:52:04.233924  340990 command_runner.go:130] > 51391683
	I0229 01:52:04.234046  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 01:52:04.244821  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 01:52:04.257923  340990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 01:52:04.263012  340990 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 01:52:04.263220  340990 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 01:52:04.263279  340990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 01:52:04.269557  340990 command_runner.go:130] > 3ec20f2e
	I0229 01:52:04.269745  340990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
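	Each CA above is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash, which is what the test/hash/ln triplets do. A condensed sketch of that loop (file names taken from the transfers above; per-file special cases omitted):
	# Link each CA under its subject hash so OpenSSL's certificate lookup finds it.
	for pem in minikubeCA.pem 323885.pem 3238852.pem; do
	  src=/usr/share/ca-certificates/$pem
	  h=$(openssl x509 -hash -noout -in "$src")
	  sudo ln -fs "$src" "/etc/ssl/certs/$h.0"
	done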
	I0229 01:52:04.279968  340990 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:52:04.284726  340990 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:52:04.284764  340990 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:52:04.284837  340990 ssh_runner.go:195] Run: crio config
	I0229 01:52:04.336929  340990 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0229 01:52:04.336965  340990 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0229 01:52:04.336979  340990 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0229 01:52:04.336986  340990 command_runner.go:130] > #
	I0229 01:52:04.336998  340990 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0229 01:52:04.337008  340990 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0229 01:52:04.337017  340990 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0229 01:52:04.337028  340990 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0229 01:52:04.337034  340990 command_runner.go:130] > # reload'.
	I0229 01:52:04.337047  340990 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0229 01:52:04.337057  340990 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0229 01:52:04.337070  340990 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0229 01:52:04.337078  340990 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0229 01:52:04.337085  340990 command_runner.go:130] > [crio]
	I0229 01:52:04.337093  340990 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0229 01:52:04.337102  340990 command_runner.go:130] > # containers images, in this directory.
	I0229 01:52:04.337115  340990 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0229 01:52:04.337175  340990 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0229 01:52:04.337193  340990 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0229 01:52:04.337205  340990 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0229 01:52:04.337214  340990 command_runner.go:130] > # imagestore = ""
	I0229 01:52:04.337224  340990 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0229 01:52:04.337237  340990 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0229 01:52:04.337247  340990 command_runner.go:130] > storage_driver = "overlay"
	I0229 01:52:04.337256  340990 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0229 01:52:04.337268  340990 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0229 01:52:04.337275  340990 command_runner.go:130] > storage_option = [
	I0229 01:52:04.337286  340990 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0229 01:52:04.337291  340990 command_runner.go:130] > ]
	I0229 01:52:04.337305  340990 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0229 01:52:04.337318  340990 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0229 01:52:04.337332  340990 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0229 01:52:04.337344  340990 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0229 01:52:04.337355  340990 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0229 01:52:04.337364  340990 command_runner.go:130] > # always happen on a node reboot
	I0229 01:52:04.337375  340990 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0229 01:52:04.337390  340990 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0229 01:52:04.337403  340990 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0229 01:52:04.337413  340990 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0229 01:52:04.337422  340990 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0229 01:52:04.337436  340990 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0229 01:52:04.337451  340990 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0229 01:52:04.337462  340990 command_runner.go:130] > # internal_wipe = true
	I0229 01:52:04.337474  340990 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0229 01:52:04.337482  340990 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0229 01:52:04.337488  340990 command_runner.go:130] > # internal_repair = false
	I0229 01:52:04.337499  340990 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0229 01:52:04.337510  340990 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0229 01:52:04.337522  340990 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0229 01:52:04.337531  340990 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0229 01:52:04.337543  340990 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0229 01:52:04.337548  340990 command_runner.go:130] > [crio.api]
	I0229 01:52:04.337565  340990 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0229 01:52:04.337574  340990 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0229 01:52:04.337588  340990 command_runner.go:130] > # IP address on which the stream server will listen.
	I0229 01:52:04.337599  340990 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0229 01:52:04.337609  340990 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0229 01:52:04.337621  340990 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0229 01:52:04.337627  340990 command_runner.go:130] > # stream_port = "0"
	I0229 01:52:04.337635  340990 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0229 01:52:04.337644  340990 command_runner.go:130] > # stream_enable_tls = false
	I0229 01:52:04.337652  340990 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0229 01:52:04.337660  340990 command_runner.go:130] > # stream_idle_timeout = ""
	I0229 01:52:04.337666  340990 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0229 01:52:04.337679  340990 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0229 01:52:04.337688  340990 command_runner.go:130] > # minutes.
	I0229 01:52:04.337695  340990 command_runner.go:130] > # stream_tls_cert = ""
	I0229 01:52:04.337708  340990 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0229 01:52:04.337717  340990 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0229 01:52:04.337724  340990 command_runner.go:130] > # stream_tls_key = ""
	I0229 01:52:04.337733  340990 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0229 01:52:04.337745  340990 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0229 01:52:04.337766  340990 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0229 01:52:04.337777  340990 command_runner.go:130] > # stream_tls_ca = ""
	I0229 01:52:04.337791  340990 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 01:52:04.337801  340990 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0229 01:52:04.337811  340990 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0229 01:52:04.337820  340990 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0229 01:52:04.337832  340990 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0229 01:52:04.337845  340990 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0229 01:52:04.337852  340990 command_runner.go:130] > [crio.runtime]
	I0229 01:52:04.337861  340990 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0229 01:52:04.337878  340990 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0229 01:52:04.337888  340990 command_runner.go:130] > # "nofile=1024:2048"
	I0229 01:52:04.337897  340990 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0229 01:52:04.337903  340990 command_runner.go:130] > # default_ulimits = [
	I0229 01:52:04.337912  340990 command_runner.go:130] > # ]
	I0229 01:52:04.337922  340990 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0229 01:52:04.337933  340990 command_runner.go:130] > # no_pivot = false
	I0229 01:52:04.337945  340990 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0229 01:52:04.337957  340990 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0229 01:52:04.337968  340990 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0229 01:52:04.337977  340990 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0229 01:52:04.337987  340990 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0229 01:52:04.337997  340990 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 01:52:04.338007  340990 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0229 01:52:04.338014  340990 command_runner.go:130] > # Cgroup setting for conmon
	I0229 01:52:04.338025  340990 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0229 01:52:04.338034  340990 command_runner.go:130] > conmon_cgroup = "pod"
	I0229 01:52:04.338044  340990 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0229 01:52:04.338055  340990 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0229 01:52:04.338066  340990 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0229 01:52:04.338077  340990 command_runner.go:130] > conmon_env = [
	I0229 01:52:04.338085  340990 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 01:52:04.338094  340990 command_runner.go:130] > ]
	I0229 01:52:04.338103  340990 command_runner.go:130] > # Additional environment variables to set for all the
	I0229 01:52:04.338115  340990 command_runner.go:130] > # containers. These are overridden if set in the
	I0229 01:52:04.338124  340990 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0229 01:52:04.338133  340990 command_runner.go:130] > # default_env = [
	I0229 01:52:04.338142  340990 command_runner.go:130] > # ]
	I0229 01:52:04.338154  340990 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0229 01:52:04.338166  340990 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0229 01:52:04.338175  340990 command_runner.go:130] > # selinux = false
	I0229 01:52:04.338185  340990 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0229 01:52:04.338198  340990 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0229 01:52:04.338213  340990 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0229 01:52:04.338222  340990 command_runner.go:130] > # seccomp_profile = ""
	I0229 01:52:04.338243  340990 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0229 01:52:04.338252  340990 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0229 01:52:04.338264  340990 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0229 01:52:04.338273  340990 command_runner.go:130] > # which might increase security.
	I0229 01:52:04.338280  340990 command_runner.go:130] > # This option is currently deprecated,
	I0229 01:52:04.338291  340990 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0229 01:52:04.338302  340990 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0229 01:52:04.338314  340990 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0229 01:52:04.338327  340990 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0229 01:52:04.338339  340990 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0229 01:52:04.338353  340990 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0229 01:52:04.338365  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:52:04.338375  340990 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0229 01:52:04.338384  340990 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0229 01:52:04.338394  340990 command_runner.go:130] > # the cgroup blockio controller.
	I0229 01:52:04.338401  340990 command_runner.go:130] > # blockio_config_file = ""
	I0229 01:52:04.338412  340990 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0229 01:52:04.338421  340990 command_runner.go:130] > # blockio parameters.
	I0229 01:52:04.338428  340990 command_runner.go:130] > # blockio_reload = false
	I0229 01:52:04.338440  340990 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0229 01:52:04.338449  340990 command_runner.go:130] > # irqbalance daemon.
	I0229 01:52:04.338458  340990 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0229 01:52:04.338470  340990 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0229 01:52:04.338483  340990 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0229 01:52:04.338496  340990 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0229 01:52:04.338508  340990 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0229 01:52:04.338521  340990 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0229 01:52:04.338532  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:52:04.338544  340990 command_runner.go:130] > # rdt_config_file = ""
	I0229 01:52:04.338552  340990 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0229 01:52:04.338562  340990 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0229 01:52:04.338587  340990 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0229 01:52:04.338597  340990 command_runner.go:130] > # separate_pull_cgroup = ""
	I0229 01:52:04.338607  340990 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0229 01:52:04.338620  340990 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0229 01:52:04.338628  340990 command_runner.go:130] > # will be added.
	I0229 01:52:04.338634  340990 command_runner.go:130] > # default_capabilities = [
	I0229 01:52:04.338644  340990 command_runner.go:130] > # 	"CHOWN",
	I0229 01:52:04.338650  340990 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0229 01:52:04.338658  340990 command_runner.go:130] > # 	"FSETID",
	I0229 01:52:04.338664  340990 command_runner.go:130] > # 	"FOWNER",
	I0229 01:52:04.338673  340990 command_runner.go:130] > # 	"SETGID",
	I0229 01:52:04.338679  340990 command_runner.go:130] > # 	"SETUID",
	I0229 01:52:04.338688  340990 command_runner.go:130] > # 	"SETPCAP",
	I0229 01:52:04.338696  340990 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0229 01:52:04.338704  340990 command_runner.go:130] > # 	"KILL",
	I0229 01:52:04.338710  340990 command_runner.go:130] > # ]
	I0229 01:52:04.338724  340990 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0229 01:52:04.338738  340990 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0229 01:52:04.338748  340990 command_runner.go:130] > # add_inheritable_capabilities = false
	I0229 01:52:04.338758  340990 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0229 01:52:04.338770  340990 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 01:52:04.338779  340990 command_runner.go:130] > # default_sysctls = [
	I0229 01:52:04.338785  340990 command_runner.go:130] > # ]
	I0229 01:52:04.338794  340990 command_runner.go:130] > # List of devices on the host that a
	I0229 01:52:04.338804  340990 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0229 01:52:04.338814  340990 command_runner.go:130] > # allowed_devices = [
	I0229 01:52:04.338821  340990 command_runner.go:130] > # 	"/dev/fuse",
	I0229 01:52:04.338828  340990 command_runner.go:130] > # ]
	I0229 01:52:04.338836  340990 command_runner.go:130] > # List of additional devices, specified as
	I0229 01:52:04.338850  340990 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0229 01:52:04.338858  340990 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0229 01:52:04.338876  340990 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0229 01:52:04.338886  340990 command_runner.go:130] > # additional_devices = [
	I0229 01:52:04.338892  340990 command_runner.go:130] > # ]
	I0229 01:52:04.338903  340990 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0229 01:52:04.338908  340990 command_runner.go:130] > # cdi_spec_dirs = [
	I0229 01:52:04.338917  340990 command_runner.go:130] > # 	"/etc/cdi",
	I0229 01:52:04.338923  340990 command_runner.go:130] > # 	"/var/run/cdi",
	I0229 01:52:04.338932  340990 command_runner.go:130] > # ]
	I0229 01:52:04.338942  340990 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0229 01:52:04.338954  340990 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0229 01:52:04.338961  340990 command_runner.go:130] > # Defaults to false.
	I0229 01:52:04.338972  340990 command_runner.go:130] > # device_ownership_from_security_context = false
	I0229 01:52:04.338985  340990 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0229 01:52:04.338995  340990 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0229 01:52:04.339006  340990 command_runner.go:130] > # hooks_dir = [
	I0229 01:52:04.339016  340990 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0229 01:52:04.339022  340990 command_runner.go:130] > # ]
	I0229 01:52:04.339034  340990 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0229 01:52:04.339049  340990 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0229 01:52:04.339062  340990 command_runner.go:130] > # its default mounts from the following two files:
	I0229 01:52:04.339070  340990 command_runner.go:130] > #
	I0229 01:52:04.339080  340990 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0229 01:52:04.339093  340990 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0229 01:52:04.339106  340990 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0229 01:52:04.339114  340990 command_runner.go:130] > #
	I0229 01:52:04.339124  340990 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0229 01:52:04.339137  340990 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0229 01:52:04.339148  340990 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0229 01:52:04.339159  340990 command_runner.go:130] > #      only add mounts it finds in this file.
	I0229 01:52:04.339167  340990 command_runner.go:130] > #
	I0229 01:52:04.339174  340990 command_runner.go:130] > # default_mounts_file = ""
	I0229 01:52:04.339185  340990 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0229 01:52:04.339199  340990 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0229 01:52:04.339207  340990 command_runner.go:130] > pids_limit = 1024
	I0229 01:52:04.339217  340990 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0229 01:52:04.339229  340990 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0229 01:52:04.339239  340990 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0229 01:52:04.339253  340990 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0229 01:52:04.339263  340990 command_runner.go:130] > # log_size_max = -1
	I0229 01:52:04.339273  340990 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0229 01:52:04.339283  340990 command_runner.go:130] > # log_to_journald = false
	I0229 01:52:04.339291  340990 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0229 01:52:04.339301  340990 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0229 01:52:04.339309  340990 command_runner.go:130] > # Path to directory for container attach sockets.
	I0229 01:52:04.339317  340990 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0229 01:52:04.339325  340990 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0229 01:52:04.339334  340990 command_runner.go:130] > # bind_mount_prefix = ""
	I0229 01:52:04.339343  340990 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0229 01:52:04.339353  340990 command_runner.go:130] > # read_only = false
	I0229 01:52:04.339362  340990 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0229 01:52:04.339375  340990 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0229 01:52:04.339384  340990 command_runner.go:130] > # live configuration reload.
	I0229 01:52:04.339391  340990 command_runner.go:130] > # log_level = "info"
	I0229 01:52:04.339404  340990 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0229 01:52:04.339416  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:52:04.339425  340990 command_runner.go:130] > # log_filter = ""
	I0229 01:52:04.339433  340990 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0229 01:52:04.339446  340990 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0229 01:52:04.339456  340990 command_runner.go:130] > # separated by comma.
	I0229 01:52:04.339468  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:52:04.339478  340990 command_runner.go:130] > # uid_mappings = ""
	I0229 01:52:04.339486  340990 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0229 01:52:04.339498  340990 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0229 01:52:04.339506  340990 command_runner.go:130] > # separated by comma.
	I0229 01:52:04.339520  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:52:04.339529  340990 command_runner.go:130] > # gid_mappings = ""
	I0229 01:52:04.339540  340990 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0229 01:52:04.339550  340990 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 01:52:04.339556  340990 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 01:52:04.339565  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:52:04.339570  340990 command_runner.go:130] > # minimum_mappable_uid = -1
	I0229 01:52:04.339575  340990 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0229 01:52:04.339583  340990 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0229 01:52:04.339590  340990 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0229 01:52:04.339597  340990 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0229 01:52:04.339603  340990 command_runner.go:130] > # minimum_mappable_gid = -1
	I0229 01:52:04.339609  340990 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0229 01:52:04.339617  340990 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0229 01:52:04.339622  340990 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0229 01:52:04.339629  340990 command_runner.go:130] > # ctr_stop_timeout = 30
	I0229 01:52:04.339634  340990 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0229 01:52:04.339641  340990 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0229 01:52:04.339646  340990 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0229 01:52:04.339651  340990 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0229 01:52:04.339655  340990 command_runner.go:130] > drop_infra_ctr = false
	I0229 01:52:04.339663  340990 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0229 01:52:04.339668  340990 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0229 01:52:04.339678  340990 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0229 01:52:04.339682  340990 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0229 01:52:04.339691  340990 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0229 01:52:04.339699  340990 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0229 01:52:04.339704  340990 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0229 01:52:04.339713  340990 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0229 01:52:04.339717  340990 command_runner.go:130] > # shared_cpuset = ""
	I0229 01:52:04.339723  340990 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0229 01:52:04.339730  340990 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0229 01:52:04.339735  340990 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0229 01:52:04.339741  340990 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0229 01:52:04.339749  340990 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0229 01:52:04.339754  340990 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0229 01:52:04.339759  340990 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0229 01:52:04.339766  340990 command_runner.go:130] > # enable_criu_support = false
	I0229 01:52:04.339771  340990 command_runner.go:130] > # Enable/disable the generation of the container,
	I0229 01:52:04.339779  340990 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0229 01:52:04.339783  340990 command_runner.go:130] > # enable_pod_events = false
	I0229 01:52:04.339789  340990 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0229 01:52:04.339802  340990 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0229 01:52:04.339806  340990 command_runner.go:130] > # default_runtime = "runc"
	I0229 01:52:04.339811  340990 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0229 01:52:04.339818  340990 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0229 01:52:04.339829  340990 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0229 01:52:04.339836  340990 command_runner.go:130] > # creation as a file is not desired either.
	I0229 01:52:04.339844  340990 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0229 01:52:04.339851  340990 command_runner.go:130] > # the hostname is being managed dynamically.
	I0229 01:52:04.339855  340990 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0229 01:52:04.339860  340990 command_runner.go:130] > # ]
	I0229 01:52:04.339870  340990 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0229 01:52:04.339879  340990 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0229 01:52:04.339885  340990 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0229 01:52:04.339892  340990 command_runner.go:130] > # Each entry in the table should follow the format:
	I0229 01:52:04.339897  340990 command_runner.go:130] > #
	I0229 01:52:04.339904  340990 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0229 01:52:04.339914  340990 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0229 01:52:04.339921  340990 command_runner.go:130] > # runtime_type = "oci"
	I0229 01:52:04.339952  340990 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0229 01:52:04.339963  340990 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0229 01:52:04.339972  340990 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0229 01:52:04.339981  340990 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0229 01:52:04.339990  340990 command_runner.go:130] > # monitor_env = []
	I0229 01:52:04.339998  340990 command_runner.go:130] > # privileged_without_host_devices = false
	I0229 01:52:04.340009  340990 command_runner.go:130] > # allowed_annotations = []
	I0229 01:52:04.340017  340990 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0229 01:52:04.340027  340990 command_runner.go:130] > # Where:
	I0229 01:52:04.340035  340990 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0229 01:52:04.340047  340990 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0229 01:52:04.340059  340990 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0229 01:52:04.340069  340990 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0229 01:52:04.340074  340990 command_runner.go:130] > #   in $PATH.
	I0229 01:52:04.340080  340990 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0229 01:52:04.340085  340990 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0229 01:52:04.340094  340990 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0229 01:52:04.340098  340990 command_runner.go:130] > #   state.
	I0229 01:52:04.340106  340990 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0229 01:52:04.340113  340990 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0229 01:52:04.340119  340990 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0229 01:52:04.340127  340990 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0229 01:52:04.340132  340990 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0229 01:52:04.340141  340990 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0229 01:52:04.340146  340990 command_runner.go:130] > #   The currently recognized values are:
	I0229 01:52:04.340152  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0229 01:52:04.340161  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0229 01:52:04.340167  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0229 01:52:04.340174  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0229 01:52:04.340184  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0229 01:52:04.340190  340990 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0229 01:52:04.340198  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0229 01:52:04.340204  340990 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0229 01:52:04.340212  340990 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0229 01:52:04.340219  340990 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0229 01:52:04.340225  340990 command_runner.go:130] > #   deprecated option "conmon".
	I0229 01:52:04.340234  340990 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0229 01:52:04.340241  340990 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0229 01:52:04.340247  340990 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0229 01:52:04.340255  340990 command_runner.go:130] > #   should be moved to the container's cgroup
	I0229 01:52:04.340262  340990 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0229 01:52:04.340269  340990 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0229 01:52:04.340275  340990 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0229 01:52:04.340283  340990 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0229 01:52:04.340286  340990 command_runner.go:130] > #
	I0229 01:52:04.340293  340990 command_runner.go:130] > # Using the seccomp notifier feature:
	I0229 01:52:04.340296  340990 command_runner.go:130] > #
	I0229 01:52:04.340304  340990 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0229 01:52:04.340310  340990 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0229 01:52:04.340313  340990 command_runner.go:130] > #
	I0229 01:52:04.340319  340990 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0229 01:52:04.340327  340990 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0229 01:52:04.340331  340990 command_runner.go:130] > #
	I0229 01:52:04.340336  340990 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0229 01:52:04.340342  340990 command_runner.go:130] > # feature.
	I0229 01:52:04.340345  340990 command_runner.go:130] > #
	I0229 01:52:04.340351  340990 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0229 01:52:04.340359  340990 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0229 01:52:04.340365  340990 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0229 01:52:04.340372  340990 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0229 01:52:04.340378  340990 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0229 01:52:04.340384  340990 command_runner.go:130] > #
	I0229 01:52:04.340390  340990 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0229 01:52:04.340397  340990 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0229 01:52:04.340400  340990 command_runner.go:130] > #
	I0229 01:52:04.340406  340990 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0229 01:52:04.340413  340990 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0229 01:52:04.340417  340990 command_runner.go:130] > #
	I0229 01:52:04.340422  340990 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0229 01:52:04.340429  340990 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0229 01:52:04.340432  340990 command_runner.go:130] > # limitation.
	I0229 01:52:04.340436  340990 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0229 01:52:04.340441  340990 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0229 01:52:04.340448  340990 command_runner.go:130] > runtime_type = "oci"
	I0229 01:52:04.340452  340990 command_runner.go:130] > runtime_root = "/run/runc"
	I0229 01:52:04.340456  340990 command_runner.go:130] > runtime_config_path = ""
	I0229 01:52:04.340461  340990 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0229 01:52:04.340465  340990 command_runner.go:130] > monitor_cgroup = "pod"
	I0229 01:52:04.340472  340990 command_runner.go:130] > monitor_exec_cgroup = ""
	I0229 01:52:04.340475  340990 command_runner.go:130] > monitor_env = [
	I0229 01:52:04.340480  340990 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0229 01:52:04.340487  340990 command_runner.go:130] > ]
	I0229 01:52:04.340491  340990 command_runner.go:130] > privileged_without_host_devices = false
	I0229 01:52:04.340499  340990 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0229 01:52:04.340505  340990 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0229 01:52:04.340513  340990 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0229 01:52:04.340520  340990 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0229 01:52:04.340529  340990 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0229 01:52:04.340534  340990 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0229 01:52:04.340542  340990 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0229 01:52:04.340552  340990 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0229 01:52:04.340557  340990 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0229 01:52:04.340567  340990 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0229 01:52:04.340571  340990 command_runner.go:130] > # Example:
	I0229 01:52:04.340577  340990 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0229 01:52:04.340582  340990 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0229 01:52:04.340589  340990 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0229 01:52:04.340594  340990 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0229 01:52:04.340600  340990 command_runner.go:130] > # cpuset = 0
	I0229 01:52:04.340603  340990 command_runner.go:130] > # cpushares = "0-1"
	I0229 01:52:04.340610  340990 command_runner.go:130] > # Where:
	I0229 01:52:04.340617  340990 command_runner.go:130] > # The workload name is workload-type.
	I0229 01:52:04.340631  340990 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0229 01:52:04.340640  340990 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0229 01:52:04.340651  340990 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0229 01:52:04.340665  340990 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0229 01:52:04.340675  340990 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0229 01:52:04.340684  340990 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0229 01:52:04.340697  340990 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0229 01:52:04.340707  340990 command_runner.go:130] > # Default value is set to true
	I0229 01:52:04.340714  340990 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0229 01:52:04.340725  340990 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0229 01:52:04.340737  340990 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0229 01:52:04.340744  340990 command_runner.go:130] > # Default value is set to 'false'
	I0229 01:52:04.340754  340990 command_runner.go:130] > # disable_hostport_mapping = false
	I0229 01:52:04.340767  340990 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0229 01:52:04.340775  340990 command_runner.go:130] > #
	I0229 01:52:04.340785  340990 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0229 01:52:04.340797  340990 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0229 01:52:04.340810  340990 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0229 01:52:04.340823  340990 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0229 01:52:04.340833  340990 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0229 01:52:04.340842  340990 command_runner.go:130] > [crio.image]
	I0229 01:52:04.340851  340990 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0229 01:52:04.340860  340990 command_runner.go:130] > # default_transport = "docker://"
	I0229 01:52:04.340876  340990 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0229 01:52:04.340889  340990 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0229 01:52:04.340897  340990 command_runner.go:130] > # global_auth_file = ""
	I0229 01:52:04.340906  340990 command_runner.go:130] > # The image used to instantiate infra containers.
	I0229 01:52:04.340916  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:52:04.340924  340990 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0229 01:52:04.340937  340990 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0229 01:52:04.340950  340990 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0229 01:52:04.340961  340990 command_runner.go:130] > # This option supports live configuration reload.
	I0229 01:52:04.340970  340990 command_runner.go:130] > # pause_image_auth_file = ""
	I0229 01:52:04.340982  340990 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0229 01:52:04.340994  340990 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0229 01:52:04.341006  340990 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0229 01:52:04.341016  340990 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0229 01:52:04.341025  340990 command_runner.go:130] > # pause_command = "/pause"
	I0229 01:52:04.341034  340990 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0229 01:52:04.341046  340990 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0229 01:52:04.341057  340990 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0229 01:52:04.341069  340990 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0229 01:52:04.341087  340990 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0229 01:52:04.341101  340990 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0229 01:52:04.341107  340990 command_runner.go:130] > # pinned_images = [
	I0229 01:52:04.341116  340990 command_runner.go:130] > # ]
	I0229 01:52:04.341126  340990 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0229 01:52:04.341136  340990 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0229 01:52:04.341144  340990 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0229 01:52:04.341150  340990 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0229 01:52:04.341158  340990 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0229 01:52:04.341162  340990 command_runner.go:130] > # signature_policy = ""
	I0229 01:52:04.341169  340990 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0229 01:52:04.341176  340990 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0229 01:52:04.341184  340990 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0229 01:52:04.341190  340990 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0229 01:52:04.341198  340990 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0229 01:52:04.341203  340990 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0229 01:52:04.341209  340990 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0229 01:52:04.341217  340990 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0229 01:52:04.341221  340990 command_runner.go:130] > # changing them here.
	I0229 01:52:04.341225  340990 command_runner.go:130] > # insecure_registries = [
	I0229 01:52:04.341231  340990 command_runner.go:130] > # ]
	I0229 01:52:04.341237  340990 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0229 01:52:04.341244  340990 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0229 01:52:04.341248  340990 command_runner.go:130] > # image_volumes = "mkdir"
	I0229 01:52:04.341255  340990 command_runner.go:130] > # Temporary directory to use for storing big files
	I0229 01:52:04.341259  340990 command_runner.go:130] > # big_files_temporary_dir = ""
	I0229 01:52:04.341267  340990 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0229 01:52:04.341273  340990 command_runner.go:130] > # CNI plugins.
	I0229 01:52:04.341277  340990 command_runner.go:130] > [crio.network]
	I0229 01:52:04.341285  340990 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0229 01:52:04.341291  340990 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0229 01:52:04.341297  340990 command_runner.go:130] > # cni_default_network = ""
	I0229 01:52:04.341303  340990 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0229 01:52:04.341310  340990 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0229 01:52:04.341316  340990 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0229 01:52:04.341322  340990 command_runner.go:130] > # plugin_dirs = [
	I0229 01:52:04.341327  340990 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0229 01:52:04.341333  340990 command_runner.go:130] > # ]
	I0229 01:52:04.341339  340990 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0229 01:52:04.341344  340990 command_runner.go:130] > [crio.metrics]
	I0229 01:52:04.341349  340990 command_runner.go:130] > # Globally enable or disable metrics support.
	I0229 01:52:04.341355  340990 command_runner.go:130] > enable_metrics = true
	I0229 01:52:04.341359  340990 command_runner.go:130] > # Specify enabled metrics collectors.
	I0229 01:52:04.341366  340990 command_runner.go:130] > # Per default all metrics are enabled.
	I0229 01:52:04.341372  340990 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0229 01:52:04.341383  340990 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0229 01:52:04.341388  340990 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0229 01:52:04.341395  340990 command_runner.go:130] > # metrics_collectors = [
	I0229 01:52:04.341399  340990 command_runner.go:130] > # 	"operations",
	I0229 01:52:04.341406  340990 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0229 01:52:04.341410  340990 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0229 01:52:04.341416  340990 command_runner.go:130] > # 	"operations_errors",
	I0229 01:52:04.341420  340990 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0229 01:52:04.341427  340990 command_runner.go:130] > # 	"image_pulls_by_name",
	I0229 01:52:04.341431  340990 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0229 01:52:04.341438  340990 command_runner.go:130] > # 	"image_pulls_failures",
	I0229 01:52:04.341442  340990 command_runner.go:130] > # 	"image_pulls_successes",
	I0229 01:52:04.341448  340990 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0229 01:52:04.341453  340990 command_runner.go:130] > # 	"image_layer_reuse",
	I0229 01:52:04.341461  340990 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0229 01:52:04.341465  340990 command_runner.go:130] > # 	"containers_oom_total",
	I0229 01:52:04.341471  340990 command_runner.go:130] > # 	"containers_oom",
	I0229 01:52:04.341475  340990 command_runner.go:130] > # 	"processes_defunct",
	I0229 01:52:04.341479  340990 command_runner.go:130] > # 	"operations_total",
	I0229 01:52:04.341486  340990 command_runner.go:130] > # 	"operations_latency_seconds",
	I0229 01:52:04.341490  340990 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0229 01:52:04.341496  340990 command_runner.go:130] > # 	"operations_errors_total",
	I0229 01:52:04.341500  340990 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0229 01:52:04.341507  340990 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0229 01:52:04.341511  340990 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0229 01:52:04.341517  340990 command_runner.go:130] > # 	"image_pulls_success_total",
	I0229 01:52:04.341521  340990 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0229 01:52:04.341530  340990 command_runner.go:130] > # 	"containers_oom_count_total",
	I0229 01:52:04.341538  340990 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0229 01:52:04.341542  340990 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0229 01:52:04.341547  340990 command_runner.go:130] > # ]
	I0229 01:52:04.341555  340990 command_runner.go:130] > # The port on which the metrics server will listen.
	I0229 01:52:04.341559  340990 command_runner.go:130] > # metrics_port = 9090
	I0229 01:52:04.341564  340990 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0229 01:52:04.341571  340990 command_runner.go:130] > # metrics_socket = ""
	I0229 01:52:04.341575  340990 command_runner.go:130] > # The certificate for the secure metrics server.
	I0229 01:52:04.341581  340990 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0229 01:52:04.341589  340990 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0229 01:52:04.341594  340990 command_runner.go:130] > # certificate on any modification event.
	I0229 01:52:04.341600  340990 command_runner.go:130] > # metrics_cert = ""
	I0229 01:52:04.341606  340990 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0229 01:52:04.341613  340990 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0229 01:52:04.341619  340990 command_runner.go:130] > # metrics_key = ""
	I0229 01:52:04.341627  340990 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0229 01:52:04.341632  340990 command_runner.go:130] > [crio.tracing]
	I0229 01:52:04.341637  340990 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0229 01:52:04.341643  340990 command_runner.go:130] > # enable_tracing = false
	I0229 01:52:04.341649  340990 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0229 01:52:04.341655  340990 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0229 01:52:04.341661  340990 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0229 01:52:04.341667  340990 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0229 01:52:04.341671  340990 command_runner.go:130] > # CRI-O NRI configuration.
	I0229 01:52:04.341678  340990 command_runner.go:130] > [crio.nri]
	I0229 01:52:04.341682  340990 command_runner.go:130] > # Globally enable or disable NRI.
	I0229 01:52:04.341685  340990 command_runner.go:130] > # enable_nri = false
	I0229 01:52:04.341689  340990 command_runner.go:130] > # NRI socket to listen on.
	I0229 01:52:04.341694  340990 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0229 01:52:04.341699  340990 command_runner.go:130] > # NRI plugin directory to use.
	I0229 01:52:04.341704  340990 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0229 01:52:04.341710  340990 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0229 01:52:04.341715  340990 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0229 01:52:04.341723  340990 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0229 01:52:04.341729  340990 command_runner.go:130] > # nri_disable_connections = false
	I0229 01:52:04.341738  340990 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0229 01:52:04.341743  340990 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0229 01:52:04.341748  340990 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0229 01:52:04.341755  340990 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0229 01:52:04.341760  340990 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0229 01:52:04.341764  340990 command_runner.go:130] > [crio.stats]
	I0229 01:52:04.341769  340990 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0229 01:52:04.341776  340990 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0229 01:52:04.341781  340990 command_runner.go:130] > # stats_collection_period = 0
	I0229 01:52:04.341821  340990 command_runner.go:130] ! time="2024-02-29 01:52:04.319642403Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0229 01:52:04.341834  340990 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
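
	The two annotation-driven mechanisms documented in the config dump above (the seccomp notifier and the experimental workloads table) are both activated from the pod side. A minimal sketch of a pod that opts into both, assuming the commented-out example workload table above were actually enabled; the pod and container names are hypothetical:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: notifier-demo                                  # hypothetical name
	  annotations:
	    io.crio/workload: ""                               # opt into the example workload (key only, value ignored)
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"  # terminate the workload on blocked syscalls, per the comments above
	spec:
	  restartPolicy: Never    # required; otherwise the kubelet restarts the container immediately
	  containers:
	  - name: main
	    image: registry.k8s.io/pause:3.9

	Since the dump shows enable_metrics = true with the default metrics_port of 9090, the effect of such settings can be observed on the node itself, e.g.:

	curl -s http://127.0.0.1:9090/metrics | grep crio_operations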
	I0229 01:52:04.341909  340990 cni.go:84] Creating CNI manager for ""
	I0229 01:52:04.341920  340990 cni.go:136] 3 nodes found, recommending kindnet
	I0229 01:52:04.341935  340990 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:52:04.341964  340990 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-107035 NodeName:multinode-107035-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 01:52:04.342123  340990 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-107035-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
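	The rendered kubeadm config above is what the joining node is fed; the copy kubeadm actually stores in the cluster can be read back for comparison with a standard kubectl call (the join preflight later in this log suggests exactly this read):

	kubectl -n kube-system get cm kubeadm-config -o yaml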
	
	I0229 01:52:04.342179  340990 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-107035-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-107035 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
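	Once the unit drop-in above has been written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp steps that follow), standard systemd commands are enough to confirm it took effect on the node; a sketch, not part of the test run:

	systemctl cat kubelet          # unit file plus the 10-kubeadm.conf drop-in
	systemctl is-active kubelet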
	I0229 01:52:04.342249  340990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 01:52:04.353044  340990 command_runner.go:130] > kubeadm
	I0229 01:52:04.353068  340990 command_runner.go:130] > kubectl
	I0229 01:52:04.353073  340990 command_runner.go:130] > kubelet
	I0229 01:52:04.353126  340990 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:52:04.353188  340990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 01:52:04.363885  340990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0229 01:52:04.383211  340990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 01:52:04.402995  340990 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0229 01:52:04.407607  340990 command_runner.go:130] > 192.168.39.183	control-plane.minikube.internal
	I0229 01:52:04.407745  340990 host.go:66] Checking if "multinode-107035" exists ...
	I0229 01:52:04.407978  340990 config.go:182] Loaded profile config "multinode-107035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:52:04.408133  340990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:52:04.408177  340990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:52:04.423978  340990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I0229 01:52:04.424478  340990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:52:04.424949  340990 main.go:141] libmachine: Using API Version  1
	I0229 01:52:04.424973  340990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:52:04.425302  340990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:52:04.425493  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:52:04.425656  340990 start.go:304] JoinCluster: &{Name:multinode-107035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-107035 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.26 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:52:04.425838  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 01:52:04.425872  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:52:04.428929  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:52:04.429297  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:52:04.429316  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:52:04.429493  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:52:04.429685  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:52:04.429845  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:52:04.429961  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa Username:docker}
	I0229 01:52:04.615086  340990 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1osru3.6x6bpporuo894kt7 --discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
	I0229 01:52:04.616571  340990 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 01:52:04.616617  340990 host.go:66] Checking if "multinode-107035" exists ...
	I0229 01:52:04.616942  340990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:52:04.617004  340990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:52:04.632911  340990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45891
	I0229 01:52:04.633429  340990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:52:04.633965  340990 main.go:141] libmachine: Using API Version  1
	I0229 01:52:04.633987  340990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:52:04.634363  340990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:52:04.634560  340990 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:52:04.634799  340990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-107035-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0229 01:52:04.634822  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:52:04.637547  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:52:04.637972  340990 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:48:04 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:52:04.638003  340990 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:52:04.638144  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:52:04.638312  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:52:04.638484  340990 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:52:04.638604  340990 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa Username:docker}
	I0229 01:52:04.812145  340990 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0229 01:52:04.861215  340990 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-tqzhh, kube-system/kube-proxy-fhzft
	I0229 01:52:07.885093  340990 command_runner.go:130] > node/multinode-107035-m03 cordoned
	I0229 01:52:07.885133  340990 command_runner.go:130] > pod "busybox-5b5d89c9d6-mwnbb" has DeletionTimestamp older than 1 seconds, skipping
	I0229 01:52:07.885140  340990 command_runner.go:130] > node/multinode-107035-m03 drained
	I0229 01:52:07.885168  340990 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-107035-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.250341485s)
	I0229 01:52:07.885197  340990 node.go:108] successfully drained node "m03"
	I0229 01:52:07.885550  340990 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:52:07.885792  340990 kapi.go:59] client config for multinode-107035: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.key", CAFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:52:07.886085  340990 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0229 01:52:07.886131  340990 round_trippers.go:463] DELETE https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m03
	I0229 01:52:07.886139  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:07.886147  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:07.886152  340990 round_trippers.go:473]     Content-Type: application/json
	I0229 01:52:07.886159  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:07.897948  340990 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0229 01:52:07.897978  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:07.897988  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:07.897995  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:07.898001  340990 round_trippers.go:580]     Content-Length: 171
	I0229 01:52:07.898005  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:07 GMT
	I0229 01:52:07.898009  340990 round_trippers.go:580]     Audit-Id: c5192d69-b0bc-4003-bf95-ee65a82d9a81
	I0229 01:52:07.898013  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:07.898020  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:07.898048  340990 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-107035-m03","kind":"nodes","uid":"7068367c-f5dd-4a1d-bba4-904a860289cd"}}
	I0229 01:52:07.898087  340990 node.go:124] successfully deleted node "m03"
	I0229 01:52:07.898100  340990 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 01:52:07.898131  340990 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 01:52:07.898161  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1osru3.6x6bpporuo894kt7 --discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-107035-m03"
	I0229 01:52:07.953663  340990 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 01:52:08.123600  340990 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 01:52:08.123637  340990 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 01:52:08.195394  340990 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:52:08.196234  340990 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:52:08.196260  340990 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 01:52:08.350614  340990 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 01:52:08.872828  340990 command_runner.go:130] > This node has joined the cluster:
	I0229 01:52:08.872865  340990 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 01:52:08.872876  340990 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 01:52:08.872886  340990 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 01:52:08.876665  340990 command_runner.go:130] ! W0229 01:52:07.945080    2318 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0229 01:52:08.876696  340990 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0229 01:52:08.876713  340990 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0229 01:52:08.876726  340990 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0229 01:52:08.876760  340990 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 01:52:09.179594  340990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=multinode-107035 minikube.k8s.io/updated_at=2024_02_29T01_52_09_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:52:09.281583  340990 command_runner.go:130] > node/multinode-107035-m02 labeled
	I0229 01:52:09.293995  340990 command_runner.go:130] > node/multinode-107035-m03 labeled
	I0229 01:52:09.295892  340990 start.go:306] JoinCluster complete in 4.870236758s
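
	The remove-and-rejoin sequence above (drain, delete via the API, fresh join token, kubeadm join) mirrors what one would run by hand. A sketch of the manual equivalent, with the token and CA hash elided:

	kubectl drain multinode-107035-m03 --ignore-daemonsets --delete-emptydir-data --force
	kubectl delete node multinode-107035-m03
	kubeadm token create --print-join-command        # on the control-plane node
	sudo kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>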
	I0229 01:52:09.295916  340990 cni.go:84] Creating CNI manager for ""
	I0229 01:52:09.295921  340990 cni.go:136] 3 nodes found, recommending kindnet
	I0229 01:52:09.295980  340990 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 01:52:09.301605  340990 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 01:52:09.301629  340990 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 01:52:09.301636  340990 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 01:52:09.301642  340990 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 01:52:09.301647  340990 command_runner.go:130] > Access: 2024-02-29 01:48:05.172185555 +0000
	I0229 01:52:09.301653  340990 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 01:52:09.301661  340990 command_runner.go:130] > Change: 2024-02-29 01:48:03.809050024 +0000
	I0229 01:52:09.301666  340990 command_runner.go:130] >  Birth: -
	I0229 01:52:09.301792  340990 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 01:52:09.301815  340990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 01:52:09.328336  340990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 01:52:09.641512  340990 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 01:52:09.648827  340990 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 01:52:09.652034  340990 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 01:52:09.664705  340990 command_runner.go:130] > daemonset.apps/kindnet configured
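	Whether the applied kindnet manifest converged on all three nodes can be checked with standard kubectl; a sketch:

	kubectl -n kube-system get daemonset kindnet -o wide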
	I0229 01:52:09.668027  340990 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:52:09.668341  340990 kapi.go:59] client config for multinode-107035: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.key", CAFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:52:09.668729  340990 round_trippers.go:463] GET https://192.168.39.183:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 01:52:09.668745  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.668757  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.668767  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.670955  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:09.670970  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.670977  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.670993  340990 round_trippers.go:580]     Content-Length: 291
	I0229 01:52:09.670997  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.671001  340990 round_trippers.go:580]     Audit-Id: 85a12084-8396-4052-9e00-1c7a89e053e1
	I0229 01:52:09.671015  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.671019  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.671023  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.671059  340990 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"886475f9-4800-446f-81db-efbd75717fab","resourceVersion":"838","creationTimestamp":"2024-02-29T01:38:23Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 01:52:09.671172  340990 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-107035" context rescaled to 1 replicas
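	The rescale above goes through the autoscaling/v1 Scale subresource of the coredns deployment; the kubectl equivalent would be:

	kubectl -n kube-system scale deployment coredns --replicas=1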
	I0229 01:52:09.671204  340990 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 01:52:09.672961  340990 out.go:177] * Verifying Kubernetes components...
	I0229 01:52:09.674242  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:52:09.691025  340990 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:52:09.691303  340990 kapi.go:59] client config for multinode-107035: &rest.Config{Host:"https://192.168.39.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/profiles/multinode-107035/client.key", CAFile:"/home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:52:09.691562  340990 node_ready.go:35] waiting up to 6m0s for node "multinode-107035-m03" to be "Ready" ...
	I0229 01:52:09.691629  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m03
	I0229 01:52:09.691637  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.691645  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.691649  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.693912  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:09.693930  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.693939  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.693943  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.693949  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.693953  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.693958  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.693962  340990 round_trippers.go:580]     Audit-Id: 89f35965-3359-4ae5-b0c6-49c7e18def5c
	I0229 01:52:09.694055  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035-m03","uid":"42b93f73-92bf-483e-a405-5e74dfe78bf1","resourceVersion":"1166","creationTimestamp":"2024-02-29T01:52:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T01_52_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:52:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0229 01:52:09.694417  340990 node_ready.go:49] node "multinode-107035-m03" has status "Ready":"True"
	I0229 01:52:09.694435  340990 node_ready.go:38] duration metric: took 2.85587ms waiting for node "multinode-107035-m03" to be "Ready" ...
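	The readiness poll above issues raw GETs against /api/v1/nodes/<name> and inspects status.conditions; the same check can be done with kubectl and JSONPath, as a sketch:

	kubectl get node multinode-107035-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'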
	I0229 01:52:09.694448  340990 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 01:52:09.694518  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods
	I0229 01:52:09.694530  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.694537  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.694541  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.698409  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:52:09.698431  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.698440  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.698446  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.698451  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.698455  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.698459  340990 round_trippers.go:580]     Audit-Id: 959f7c71-3885-494c-91dc-aee67a301d09
	I0229 01:52:09.698469  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.700189  340990 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1172"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"820","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81706 chars]
	I0229 01:52:09.703695  340990 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:09.703782  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5fqf2
	I0229 01:52:09.703794  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.703804  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.703814  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.706171  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:09.706192  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.706201  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.706205  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.706210  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.706214  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.706217  340990 round_trippers.go:580]     Audit-Id: 1b962c0e-a8de-48a1-9612-8f2a766ceb2c
	I0229 01:52:09.706221  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.706468  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5fqf2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2730e330-16ca-4b2d-a5dc-330ff37ab57e","resourceVersion":"820","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"cddfbfdc-1806-452e-a0ed-59800097fcfc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cddfbfdc-1806-452e-a0ed-59800097fcfc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6226 chars]
	I0229 01:52:09.706831  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:52:09.706842  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.706848  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.706853  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.708905  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:09.708920  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.708929  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.708935  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.708940  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.708944  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.708951  340990 round_trippers.go:580]     Audit-Id: dbfd5817-2d08-47d5-8b69-7238b20ad6c0
	I0229 01:52:09.708954  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.709162  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:52:09.709570  340990 pod_ready.go:92] pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace has status "Ready":"True"
	I0229 01:52:09.709595  340990 pod_ready.go:81] duration metric: took 5.872922ms waiting for pod "coredns-5dd5756b68-5fqf2" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:09.709607  340990 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:09.709668  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107035
	I0229 01:52:09.709677  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.709687  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.709695  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.712262  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:09.712277  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.712283  340990 round_trippers.go:580]     Audit-Id: a5baf508-56c5-480b-9d1f-ba7cef117eb0
	I0229 01:52:09.712286  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.712289  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.712294  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.712298  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.712302  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.712636  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107035","namespace":"kube-system","uid":"65255c97-af0a-4233-b308-e46dfd75a9f9","resourceVersion":"841","creationTimestamp":"2024-02-29T01:38:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.183:2379","kubernetes.io/config.hash":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.mirror":"538e47eff06230d1aef45a2db671ce73","kubernetes.io/config.seen":"2024-02-29T01:38:16.621157329Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5825 chars]
	I0229 01:52:09.712952  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:52:09.712964  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.712971  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.712974  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.714674  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:52:09.714684  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.714689  340990 round_trippers.go:580]     Audit-Id: 0471b686-7f48-41fc-9d6b-e4b37615c59d
	I0229 01:52:09.714693  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.714700  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.714705  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.714709  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.714713  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.714947  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:52:09.715208  340990 pod_ready.go:92] pod "etcd-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:52:09.715223  340990 pod_ready.go:81] duration metric: took 5.61005ms waiting for pod "etcd-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:09.715238  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:09.715277  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-107035
	I0229 01:52:09.715284  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.715290  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.715296  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.717163  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:52:09.717179  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.717188  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.717193  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.717198  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.717203  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.717207  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.717210  340990 round_trippers.go:580]     Audit-Id: dcd12e3c-c218-43d8-8fb7-48a774506209
	I0229 01:52:09.717342  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-107035","namespace":"kube-system","uid":"c8a5ad6e-c2cc-49a4-8837-ba1b280f87af","resourceVersion":"839","creationTimestamp":"2024-02-29T01:38:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.183:8443","kubernetes.io/config.hash":"f8e3f19840dda0faee1ad3a91ae482c1","kubernetes.io/config.mirror":"f8e3f19840dda0faee1ad3a91ae482c1","kubernetes.io/config.seen":"2024-02-29T01:38:16.621158531Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7351 chars]
	I0229 01:52:09.717662  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:52:09.717672  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.717679  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.717684  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.720886  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:52:09.720899  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.720905  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.720909  340990 round_trippers.go:580]     Audit-Id: e973de91-3968-4505-a2a4-497626f41382
	I0229 01:52:09.720913  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.720916  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.720918  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.720921  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.721668  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:52:09.721937  340990 pod_ready.go:92] pod "kube-apiserver-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:52:09.721954  340990 pod_ready.go:81] duration metric: took 6.708512ms waiting for pod "kube-apiserver-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:09.721964  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:09.722004  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-107035
	I0229 01:52:09.722013  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.722019  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.722023  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.724085  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:09.724099  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.724105  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.724108  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.724110  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.724112  340990 round_trippers.go:580]     Audit-Id: 62e5abf4-6ace-4beb-9abd-bad1c1f9d306
	I0229 01:52:09.724114  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.724117  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.724447  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-107035","namespace":"kube-system","uid":"cc34d9e0-d4bd-4fac-8c94-6ead8a744abc","resourceVersion":"834","creationTimestamp":"2024-02-29T01:38:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d885436ac2f1544135b29b38fb6816fc","kubernetes.io/config.mirror":"d885436ac2f1544135b29b38fb6816fc","kubernetes.io/config.seen":"2024-02-29T01:38:23.684826383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6907 chars]
	I0229 01:52:09.724769  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:52:09.724781  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.724787  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.724792  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.726583  340990 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 01:52:09.726598  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.726604  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.726608  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.726613  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.726617  340990 round_trippers.go:580]     Audit-Id: 61d144c6-7eae-4fe2-9dee-6e601823efdb
	I0229 01:52:09.726619  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.726622  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.726783  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:52:09.727049  340990 pod_ready.go:92] pod "kube-controller-manager-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:52:09.727062  340990 pod_ready.go:81] duration metric: took 5.093154ms waiting for pod "kube-controller-manager-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:09.727071  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2vt7v" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:09.892479  340990 request.go:629] Waited for 165.320183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vt7v
	I0229 01:52:09.892608  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vt7v
	I0229 01:52:09.892621  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:09.892634  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:09.892645  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:09.895575  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:09.895601  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:09.895611  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:09.895618  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:09 GMT
	I0229 01:52:09.895623  340990 round_trippers.go:580]     Audit-Id: 4d19d87a-bfda-4e52-a0f0-29c7d7c9ca18
	I0229 01:52:09.895628  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:09.895633  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:09.895661  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:09.896139  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vt7v","generateName":"kube-proxy-","namespace":"kube-system","uid":"eaa78334-8191-47e9-b001-343c90a87460","resourceVersion":"1001","creationTimestamp":"2024-02-29T01:39:16Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:39:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5693 chars]
	I0229 01:52:10.092087  340990 request.go:629] Waited for 195.407751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m02
	I0229 01:52:10.092176  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m02
	I0229 01:52:10.092194  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:10.092206  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:10.092215  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:10.095098  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:10.095123  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:10.095131  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:10.095137  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:10.095142  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:10.095148  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:10.095153  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:10 GMT
	I0229 01:52:10.095158  340990 round_trippers.go:580]     Audit-Id: 66e6c923-73db-4aee-81ef-da7c03d0f960
	I0229 01:52:10.095346  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035-m02","uid":"141aef2e-52e8-4d4b-87d6-36291e7a5ea8","resourceVersion":"1165","creationTimestamp":"2024-02-29T01:50:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T01_52_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:50:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0229 01:52:10.095729  340990 pod_ready.go:92] pod "kube-proxy-2vt7v" in "kube-system" namespace has status "Ready":"True"
	I0229 01:52:10.095753  340990 pod_ready.go:81] duration metric: took 368.675402ms waiting for pod "kube-proxy-2vt7v" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:10.095766  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vhtd" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:10.292774  340990 request.go:629] Waited for 196.885135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vhtd
	I0229 01:52:10.292857  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7vhtd
	I0229 01:52:10.292870  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:10.292881  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:10.292889  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:10.295435  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:10.295455  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:10.295461  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:10.295465  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:10.295468  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:10.295471  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:10 GMT
	I0229 01:52:10.295473  340990 round_trippers.go:580]     Audit-Id: 39a5a7c3-5285-4226-b25d-7494b45d2c50
	I0229 01:52:10.295477  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:10.295797  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7vhtd","generateName":"kube-proxy-","namespace":"kube-system","uid":"1a552ea7-1d99-46ec-99e1-30ad4ac72ca8","resourceVersion":"775","creationTimestamp":"2024-02-29T01:38:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5484 chars]
	I0229 01:52:10.492680  340990 request.go:629] Waited for 196.378611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:52:10.492762  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:52:10.492768  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:10.492775  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:10.492779  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:10.496495  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:52:10.496514  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:10.496521  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:10.496526  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:10.496530  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:10.496532  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:10.496535  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:10 GMT
	I0229 01:52:10.496537  340990 round_trippers.go:580]     Audit-Id: 82193a72-fa39-4015-afb3-aae97ad53fe5
	I0229 01:52:10.497689  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:52:10.498091  340990 pod_ready.go:92] pod "kube-proxy-7vhtd" in "kube-system" namespace has status "Ready":"True"
	I0229 01:52:10.498112  340990 pod_ready.go:81] duration metric: took 402.332372ms waiting for pod "kube-proxy-7vhtd" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:10.498127  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fhzft" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:10.692554  340990 request.go:629] Waited for 194.337813ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhzft
	I0229 01:52:10.692636  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fhzft
	I0229 01:52:10.692647  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:10.692655  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:10.692663  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:10.696336  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:52:10.696364  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:10.696374  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:10.696379  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:10.696383  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:10.696387  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:10.696391  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:10 GMT
	I0229 01:52:10.696395  340990 round_trippers.go:580]     Audit-Id: d32a72b2-af28-4f5c-a5f9-318c0e8820fd
	I0229 01:52:10.696704  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fhzft","generateName":"kube-proxy-","namespace":"kube-system","uid":"3b05cd87-92a9-4c59-879a-d42c3a08c7d4","resourceVersion":"1184","creationTimestamp":"2024-02-29T01:40:04Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"02144a11-c41b-4c40-be0e-44f538bad496","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:40:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02144a11-c41b-4c40-be0e-44f538bad496\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5697 chars]
	I0229 01:52:10.892611  340990 request.go:629] Waited for 195.378679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m03
	I0229 01:52:10.892720  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035-m03
	I0229 01:52:10.892734  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:10.892744  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:10.892751  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:10.895518  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:10.895540  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:10.895547  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:10 GMT
	I0229 01:52:10.895550  340990 round_trippers.go:580]     Audit-Id: 848cd9fb-b294-44b4-8a58-84545b262c9e
	I0229 01:52:10.895553  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:10.895557  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:10.895560  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:10.895567  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:10.895974  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035-m03","uid":"42b93f73-92bf-483e-a405-5e74dfe78bf1","resourceVersion":"1166","creationTimestamp":"2024-02-29T01:52:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T01_52_09_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:52:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0229 01:52:10.896351  340990 pod_ready.go:92] pod "kube-proxy-fhzft" in "kube-system" namespace has status "Ready":"True"
	I0229 01:52:10.896372  340990 pod_ready.go:81] duration metric: took 398.237589ms waiting for pod "kube-proxy-fhzft" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:10.896385  340990 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:11.092490  340990 request.go:629] Waited for 196.018027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107035
	I0229 01:52:11.092566  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107035
	I0229 01:52:11.092571  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:11.092580  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:11.092583  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:11.095972  340990 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 01:52:11.095997  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:11.096005  340990 round_trippers.go:580]     Audit-Id: bb3fe811-5d61-49cb-b095-01ffd27dd56a
	I0229 01:52:11.096009  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:11.096013  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:11.096019  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:11.096024  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:11.096028  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:11 GMT
	I0229 01:52:11.096240  340990 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-107035","namespace":"kube-system","uid":"ac9bc04a-dac0-40f5-b928-4cacd028df82","resourceVersion":"840","creationTimestamp":"2024-02-29T01:38:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ef2538a195901383d6f1be68d27ee2ba","kubernetes.io/config.mirror":"ef2538a195901383d6f1be68d27ee2ba","kubernetes.io/config.seen":"2024-02-29T01:38:23.684827179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T01:38:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4646 chars]
	I0229 01:52:11.292121  340990 request.go:629] Waited for 195.37311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:52:11.292198  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes/multinode-107035
	I0229 01:52:11.292206  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:11.292217  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:11.292225  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:11.295168  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:11.295197  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:11.295207  340990 round_trippers.go:580]     Audit-Id: c5d0587b-5274-46b8-9b76-137d40d34f7d
	I0229 01:52:11.295213  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:11.295220  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:11.295224  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:11.295228  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:11.295232  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:11 GMT
	I0229 01:52:11.295413  340990 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-29T01:38:20Z","fieldsType":"FieldsV1","fiel [truncated 6496 chars]
	I0229 01:52:11.295797  340990 pod_ready.go:92] pod "kube-scheduler-multinode-107035" in "kube-system" namespace has status "Ready":"True"
	I0229 01:52:11.295813  340990 pod_ready.go:81] duration metric: took 399.420588ms waiting for pod "kube-scheduler-multinode-107035" in "kube-system" namespace to be "Ready" ...
	I0229 01:52:11.295824  340990 pod_ready.go:38] duration metric: took 1.601367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
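The pod_ready.go entries above poll each system pod until its PodReady condition reports True. A minimal client-go sketch of that polling pattern, assuming kubeconfig-based auth; the helper name waitPodReady is hypothetical and this is illustrative only, not minikube's actual pod_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady re-fetches the pod until its Ready condition is True,
    // the same signal the pod_ready.go log lines report.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond): // poll interval, arbitrary
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	if err := waitPodReady(ctx, cs, "kube-system", "etcd-multinode-107035"); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }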
	I0229 01:52:11.295849  340990 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:52:11.295902  340990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:52:11.314605  340990 system_svc.go:56] duration metric: took 18.75108ms WaitForService to wait for kubelet.
	I0229 01:52:11.314644  340990 kubeadm.go:581] duration metric: took 1.643413131s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
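The "Waited for … due to client-side throttling, not priority and fairness" entries scattered through the wait loop above are emitted by client-go's client-side rate limiter, which defaults to roughly 5 requests/second with a burst of 10; with six pods plus node lookups, the poller routinely exceeds that and gets delayed. A sketch of raising those limits on a rest.Config, reusing the imports from the sketch above (the values 50/100 are arbitrary, for illustration only):

    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
    	panic(err)
    }
    cfg.QPS = 50    // steady-state requests per second before throttling
    cfg.Burst = 100 // short-term burst allowance above QPS
    cs := kubernetes.NewForConfigOrDie(cfg)
    _ = cs // use as in the readiness sketch above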
	I0229 01:52:11.314677  340990 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:52:11.492139  340990 request.go:629] Waited for 177.357277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.183:8443/api/v1/nodes
	I0229 01:52:11.492218  340990 round_trippers.go:463] GET https://192.168.39.183:8443/api/v1/nodes
	I0229 01:52:11.492225  340990 round_trippers.go:469] Request Headers:
	I0229 01:52:11.492239  340990 round_trippers.go:473]     Accept: application/json, */*
	I0229 01:52:11.492250  340990 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0229 01:52:11.495110  340990 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 01:52:11.495138  340990 round_trippers.go:577] Response Headers:
	I0229 01:52:11.495149  340990 round_trippers.go:580]     Audit-Id: 5287e01e-0e59-4019-9b36-1ba1e858df95
	I0229 01:52:11.495155  340990 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 01:52:11.495161  340990 round_trippers.go:580]     Content-Type: application/json
	I0229 01:52:11.495166  340990 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26805756-868f-4ddc-9b78-0debbfbf57c7
	I0229 01:52:11.495171  340990 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a4912b96-fad7-4c91-a581-8b040e325fad
	I0229 01:52:11.495176  340990 round_trippers.go:580]     Date: Thu, 29 Feb 2024 01:52:11 GMT
	I0229 01:52:11.495423  340990 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1187"},"items":[{"metadata":{"name":"multinode-107035","uid":"3b1c251c-3797-4f43-accc-6041d87d0cdb","resourceVersion":"849","creationTimestamp":"2024-02-29T01:38:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107035","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f83faa3abac33ca85ff15afa19006ad0a2554d61","minikube.k8s.io/name":"multinode-107035","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T01_38_24_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16521 chars]
	I0229 01:52:11.496021  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:52:11.496040  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:52:11.496052  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:52:11.496056  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:52:11.496059  340990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:52:11.496062  340990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:52:11.496066  340990 node_conditions.go:105] duration metric: took 181.38438ms to run NodePressure ...
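The node_conditions check above reads each node's ephemeral-storage and cpu capacity (17734596Ki and 2 for all three nodes here). Those values come from status.capacity on the node list; continuing the client setup from the first sketch, a minimal version of that read:

    nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
    	panic(err)
    }
    for _, n := range nodes.Items {
    	storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    	cpu := n.Status.Capacity[corev1.ResourceCPU]
    	fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    }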
	I0229 01:52:11.496081  340990 start.go:228] waiting for startup goroutines ...
	I0229 01:52:11.496103  340990 start.go:242] writing updated cluster config ...
	I0229 01:52:11.496386  340990 ssh_runner.go:195] Run: rm -f paused
	I0229 01:52:11.548757  340990 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 01:52:11.551179  340990 out.go:177] * Done! kubectl is now configured to use "multinode-107035" cluster and "default" namespace by default
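The closing line notes a one-minor-version skew between the kubectl client (1.29.2) and the cluster (1.28.4), which is within kubectl's documented +/-1 minor-version support window, so it is reported informationally rather than as a warning. The same skew can be inspected by hand with:

    kubectl version --output=json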
	
	
	==> CRI-O <==
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.657032940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709171532657008470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8142851-0bac-40b7-9669-d92d031f9380 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.657632944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4e9e42c-575a-48a6-bb99-ecfa9c96728a name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.657717730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4e9e42c-575a-48a6-bb99-ecfa9c96728a name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.657928889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d6786225922ec4166b65122313c474e207b258a22fa878cbf6efe34bea92b40,PodSandboxId:722958adcfabcc8933d401cc97824b6086b6a129652f4047591b935b32923d2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709171346411731590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d7986-be05-4caf-bec9-ef577b473d77,},Annotations:map[string]string{io.kubernetes.container.hash: 64080283,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3d39c06fd6c8427a6199d19311af458e1bee718c2de782e676123ad599c865,PodSandboxId:ad23722755bc0719accb2f6321eea4fe3471de633fee9b16f52c09b46e566bea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709171325840639733,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dpkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 209e9e3f-1414-4989-94ea-5e41052c8293,},Annotations:map[string]string{io.kubernetes.container.hash: 78b65a22,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a0cc822409bee02b0d0bc6362b3a04f348078067d48a8dbc68632856cedbc8,PodSandboxId:f095c423c0019c7ab5927e1f92a1714a78e6b853d63389bb39073fdbe41193de,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709171323146739470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5fqf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2730e330-16ca-4b2d-a5dc-330ff37ab57e,},Annotations:map[string]string{io.kubernetes.container.hash: 467be881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\
"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9033efbb92c61b305b10851e975dbcc3fdf359ff85fb7ebf99e2136498d20a5b,PodSandboxId:612ad708ebdd8a4f7526f54d459f2aee02eff2faa9f8cddd815038cef463725b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709171319607673575,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hfz2n,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 3ba1ea9a-17be-421b-b430-21e867586927,},Annotations:map[string]string{io.kubernetes.container.hash: b4427474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd03c744053e9873ccea3addf83090568b6bd919a19470eea31df2083d76d9,PodSandboxId:2fd7c127c4a4122901ccfc15417e80c1fd3f1f3d2b3d7efb22203018558ac4b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709171315799414382,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vhtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a552
ea7-1d99-46ec-99e1-30ad4ac72ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cf1745dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d647b251f0c07c4e910afc030ac987f074092a1860a29c742372dcdecb4df7,PodSandboxId:722958adcfabcc8933d401cc97824b6086b6a129652f4047591b935b32923d2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709171315612340115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d7986-be05-4caf
-bec9-ef577b473d77,},Annotations:map[string]string{io.kubernetes.container.hash: 64080283,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df038f5fec0f6148a678458f048a257aea24620a2edc83f782d7c20809163c,PodSandboxId:ba313dfabc81867fcd99b25d5e861eec705604be1cdaca6e0ce3cdc265923a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709171310885254032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2538a195901383d6f1be68d27e
e2ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b18977693bcb7f0a9f8322cd0669191d537a1b92b2f94e0055d5b98227ea66,PodSandboxId:18222dbd322ce676c6d5de066c8d66b41f04d14f3dd71ffb5bf4ab21371d5ee4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709171310869969389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538e47eff06230d1aef45a2db671ce73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 283257ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbff9603adb04b7a2dee1ddf10a06c56acf33459acb6bcaaecc9f0c8b8cf4d0,PodSandboxId:d6cdfe6f0e7b04eea2144112b666240574d8f30686a958e96ef130fb054edff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709171310781919603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d885436ac2f1544135b29b38fb6816fc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf42f4e958a034fd410ebf308cde5b945317bfaa80f11e963499fedc95be5c7e,PodSandboxId:526e024367c7987bd979a33b530650b6521f82d862cbd9257ccf37d97edef968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709171310754038523,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e3f19840dda0faee1ad3a91ae482c1,},Annotations:map[string]string{io.kuber
netes.container.hash: 66677af6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4e9e42c-575a-48a6-bb99-ecfa9c96728a name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.699304671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84fbad05-8da1-446b-968b-3ea5c44e987a name=/runtime.v1.RuntimeService/Version
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.699415438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84fbad05-8da1-446b-968b-3ea5c44e987a name=/runtime.v1.RuntimeService/Version
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.700874721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20ab0b06-a711-4d85-a327-ff4afa8627aa name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.701634349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709171532701609098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20ab0b06-a711-4d85-a327-ff4afa8627aa name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.702370783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87471121-0d74-40c8-a386-f09715b37317 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.702432123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87471121-0d74-40c8-a386-f09715b37317 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.702866142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d6786225922ec4166b65122313c474e207b258a22fa878cbf6efe34bea92b40,PodSandboxId:722958adcfabcc8933d401cc97824b6086b6a129652f4047591b935b32923d2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709171346411731590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d7986-be05-4caf-bec9-ef577b473d77,},Annotations:map[string]string{io.kubernetes.container.hash: 64080283,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3d39c06fd6c8427a6199d19311af458e1bee718c2de782e676123ad599c865,PodSandboxId:ad23722755bc0719accb2f6321eea4fe3471de633fee9b16f52c09b46e566bea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709171325840639733,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dpkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 209e9e3f-1414-4989-94ea-5e41052c8293,},Annotations:map[string]string{io.kubernetes.container.hash: 78b65a22,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a0cc822409bee02b0d0bc6362b3a04f348078067d48a8dbc68632856cedbc8,PodSandboxId:f095c423c0019c7ab5927e1f92a1714a78e6b853d63389bb39073fdbe41193de,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709171323146739470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5fqf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2730e330-16ca-4b2d-a5dc-330ff37ab57e,},Annotations:map[string]string{io.kubernetes.container.hash: 467be881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\
"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9033efbb92c61b305b10851e975dbcc3fdf359ff85fb7ebf99e2136498d20a5b,PodSandboxId:612ad708ebdd8a4f7526f54d459f2aee02eff2faa9f8cddd815038cef463725b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709171319607673575,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hfz2n,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 3ba1ea9a-17be-421b-b430-21e867586927,},Annotations:map[string]string{io.kubernetes.container.hash: b4427474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd03c744053e9873ccea3addf83090568b6bd919a19470eea31df2083d76d9,PodSandboxId:2fd7c127c4a4122901ccfc15417e80c1fd3f1f3d2b3d7efb22203018558ac4b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709171315799414382,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vhtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a552
ea7-1d99-46ec-99e1-30ad4ac72ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cf1745dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d647b251f0c07c4e910afc030ac987f074092a1860a29c742372dcdecb4df7,PodSandboxId:722958adcfabcc8933d401cc97824b6086b6a129652f4047591b935b32923d2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709171315612340115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d7986-be05-4caf
-bec9-ef577b473d77,},Annotations:map[string]string{io.kubernetes.container.hash: 64080283,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df038f5fec0f6148a678458f048a257aea24620a2edc83f782d7c20809163c,PodSandboxId:ba313dfabc81867fcd99b25d5e861eec705604be1cdaca6e0ce3cdc265923a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709171310885254032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2538a195901383d6f1be68d27e
e2ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b18977693bcb7f0a9f8322cd0669191d537a1b92b2f94e0055d5b98227ea66,PodSandboxId:18222dbd322ce676c6d5de066c8d66b41f04d14f3dd71ffb5bf4ab21371d5ee4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709171310869969389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538e47eff06230d1aef45a2db671ce73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 283257ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbff9603adb04b7a2dee1ddf10a06c56acf33459acb6bcaaecc9f0c8b8cf4d0,PodSandboxId:d6cdfe6f0e7b04eea2144112b666240574d8f30686a958e96ef130fb054edff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709171310781919603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d885436ac2f1544135b29b38fb6816fc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf42f4e958a034fd410ebf308cde5b945317bfaa80f11e963499fedc95be5c7e,PodSandboxId:526e024367c7987bd979a33b530650b6521f82d862cbd9257ccf37d97edef968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709171310754038523,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e3f19840dda0faee1ad3a91ae482c1,},Annotations:map[string]string{io.kuber
netes.container.hash: 66677af6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87471121-0d74-40c8-a386-f09715b37317 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.748288939Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad20f606-087a-4081-ae92-93f376bb3f32 name=/runtime.v1.RuntimeService/Version
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.748381898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad20f606-087a-4081-ae92-93f376bb3f32 name=/runtime.v1.RuntimeService/Version
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.749638692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=793d842b-7337-4252-b0a7-9474b880d09c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.750340219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709171532750310304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=793d842b-7337-4252-b0a7-9474b880d09c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.751178945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81b7961c-6313-4b71-9abf-c80f26a397f6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.751239568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81b7961c-6313-4b71-9abf-c80f26a397f6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.751781498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d6786225922ec4166b65122313c474e207b258a22fa878cbf6efe34bea92b40,PodSandboxId:722958adcfabcc8933d401cc97824b6086b6a129652f4047591b935b32923d2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709171346411731590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d7986-be05-4caf-bec9-ef577b473d77,},Annotations:map[string]string{io.kubernetes.container.hash: 64080283,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3d39c06fd6c8427a6199d19311af458e1bee718c2de782e676123ad599c865,PodSandboxId:ad23722755bc0719accb2f6321eea4fe3471de633fee9b16f52c09b46e566bea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709171325840639733,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dpkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 209e9e3f-1414-4989-94ea-5e41052c8293,},Annotations:map[string]string{io.kubernetes.container.hash: 78b65a22,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a0cc822409bee02b0d0bc6362b3a04f348078067d48a8dbc68632856cedbc8,PodSandboxId:f095c423c0019c7ab5927e1f92a1714a78e6b853d63389bb39073fdbe41193de,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709171323146739470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5fqf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2730e330-16ca-4b2d-a5dc-330ff37ab57e,},Annotations:map[string]string{io.kubernetes.container.hash: 467be881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\
"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9033efbb92c61b305b10851e975dbcc3fdf359ff85fb7ebf99e2136498d20a5b,PodSandboxId:612ad708ebdd8a4f7526f54d459f2aee02eff2faa9f8cddd815038cef463725b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709171319607673575,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hfz2n,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 3ba1ea9a-17be-421b-b430-21e867586927,},Annotations:map[string]string{io.kubernetes.container.hash: b4427474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd03c744053e9873ccea3addf83090568b6bd919a19470eea31df2083d76d9,PodSandboxId:2fd7c127c4a4122901ccfc15417e80c1fd3f1f3d2b3d7efb22203018558ac4b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709171315799414382,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vhtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a552
ea7-1d99-46ec-99e1-30ad4ac72ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cf1745dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d647b251f0c07c4e910afc030ac987f074092a1860a29c742372dcdecb4df7,PodSandboxId:722958adcfabcc8933d401cc97824b6086b6a129652f4047591b935b32923d2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709171315612340115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d7986-be05-4caf
-bec9-ef577b473d77,},Annotations:map[string]string{io.kubernetes.container.hash: 64080283,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df038f5fec0f6148a678458f048a257aea24620a2edc83f782d7c20809163c,PodSandboxId:ba313dfabc81867fcd99b25d5e861eec705604be1cdaca6e0ce3cdc265923a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709171310885254032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2538a195901383d6f1be68d27e
e2ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b18977693bcb7f0a9f8322cd0669191d537a1b92b2f94e0055d5b98227ea66,PodSandboxId:18222dbd322ce676c6d5de066c8d66b41f04d14f3dd71ffb5bf4ab21371d5ee4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709171310869969389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538e47eff06230d1aef45a2db671ce73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 283257ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbff9603adb04b7a2dee1ddf10a06c56acf33459acb6bcaaecc9f0c8b8cf4d0,PodSandboxId:d6cdfe6f0e7b04eea2144112b666240574d8f30686a958e96ef130fb054edff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709171310781919603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d885436ac2f1544135b29b38fb6816fc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf42f4e958a034fd410ebf308cde5b945317bfaa80f11e963499fedc95be5c7e,PodSandboxId:526e024367c7987bd979a33b530650b6521f82d862cbd9257ccf37d97edef968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709171310754038523,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e3f19840dda0faee1ad3a91ae482c1,},Annotations:map[string]string{io.kuber
netes.container.hash: 66677af6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81b7961c-6313-4b71-9abf-c80f26a397f6 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.797599257Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae507824-1872-4203-b755-683e1e571c44 name=/runtime.v1.RuntimeService/Version
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.797684063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae507824-1872-4203-b755-683e1e571c44 name=/runtime.v1.RuntimeService/Version
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.799229824Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be3a0098-a1b4-448f-90fb-b3a69f67cde2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.799804480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709171532799777381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be3a0098-a1b4-448f-90fb-b3a69f67cde2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.800327316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95d68959-105d-4c29-987d-ad000cb55225 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.800379184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95d68959-105d-4c29-987d-ad000cb55225 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 01:52:12 multinode-107035 crio[669]: time="2024-02-29 01:52:12.800722294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3d6786225922ec4166b65122313c474e207b258a22fa878cbf6efe34bea92b40,PodSandboxId:722958adcfabcc8933d401cc97824b6086b6a129652f4047591b935b32923d2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709171346411731590,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d7986-be05-4caf-bec9-ef577b473d77,},Annotations:map[string]string{io.kubernetes.container.hash: 64080283,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3d39c06fd6c8427a6199d19311af458e1bee718c2de782e676123ad599c865,PodSandboxId:ad23722755bc0719accb2f6321eea4fe3471de633fee9b16f52c09b46e566bea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1709171325840639733,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-dpkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 209e9e3f-1414-4989-94ea-5e41052c8293,},Annotations:map[string]string{io.kubernetes.container.hash: 78b65a22,io.kubernetes.container.restartCount: 1,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34a0cc822409bee02b0d0bc6362b3a04f348078067d48a8dbc68632856cedbc8,PodSandboxId:f095c423c0019c7ab5927e1f92a1714a78e6b853d63389bb39073fdbe41193de,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709171323146739470,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5fqf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2730e330-16ca-4b2d-a5dc-330ff37ab57e,},Annotations:map[string]string{io.kubernetes.container.hash: 467be881,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\
"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9033efbb92c61b305b10851e975dbcc3fdf359ff85fb7ebf99e2136498d20a5b,PodSandboxId:612ad708ebdd8a4f7526f54d459f2aee02eff2faa9f8cddd815038cef463725b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1709171319607673575,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hfz2n,io.kubernetes.pod.namespace: kube-sy
stem,io.kubernetes.pod.uid: 3ba1ea9a-17be-421b-b430-21e867586927,},Annotations:map[string]string{io.kubernetes.container.hash: b4427474,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbcd03c744053e9873ccea3addf83090568b6bd919a19470eea31df2083d76d9,PodSandboxId:2fd7c127c4a4122901ccfc15417e80c1fd3f1f3d2b3d7efb22203018558ac4b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709171315799414382,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vhtd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a552
ea7-1d99-46ec-99e1-30ad4ac72ca8,},Annotations:map[string]string{io.kubernetes.container.hash: cf1745dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3d647b251f0c07c4e910afc030ac987f074092a1860a29c742372dcdecb4df7,PodSandboxId:722958adcfabcc8933d401cc97824b6086b6a129652f4047591b935b32923d2e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709171315612340115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d7986-be05-4caf
-bec9-ef577b473d77,},Annotations:map[string]string{io.kubernetes.container.hash: 64080283,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99df038f5fec0f6148a678458f048a257aea24620a2edc83f782d7c20809163c,PodSandboxId:ba313dfabc81867fcd99b25d5e861eec705604be1cdaca6e0ce3cdc265923a8d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709171310885254032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef2538a195901383d6f1be68d27e
e2ba,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59b18977693bcb7f0a9f8322cd0669191d537a1b92b2f94e0055d5b98227ea66,PodSandboxId:18222dbd322ce676c6d5de066c8d66b41f04d14f3dd71ffb5bf4ab21371d5ee4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709171310869969389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 538e47eff06230d1aef45a2db671ce73,},Annotations:map[string]string{io.kube
rnetes.container.hash: 283257ef,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbff9603adb04b7a2dee1ddf10a06c56acf33459acb6bcaaecc9f0c8b8cf4d0,PodSandboxId:d6cdfe6f0e7b04eea2144112b666240574d8f30686a958e96ef130fb054edff8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709171310781919603,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d885436ac2f1544135b29b38fb6816fc,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf42f4e958a034fd410ebf308cde5b945317bfaa80f11e963499fedc95be5c7e,PodSandboxId:526e024367c7987bd979a33b530650b6521f82d862cbd9257ccf37d97edef968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709171310754038523,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-107035,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8e3f19840dda0faee1ad3a91ae482c1,},Annotations:map[string]string{io.kuber
netes.container.hash: 66677af6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95d68959-105d-4c29-987d-ad000cb55225 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d6786225922e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   722958adcfabc       storage-provisioner
	ee3d39c06fd6c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   ad23722755bc0       busybox-5b5d89c9d6-dpkx5
	34a0cc822409b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   f095c423c0019       coredns-5dd5756b68-5fqf2
	9033efbb92c61       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    3 minutes ago       Running             kindnet-cni               1                   612ad708ebdd8       kindnet-hfz2n
	cbcd03c744053       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   2fd7c127c4a41       kube-proxy-7vhtd
	c3d647b251f0c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   722958adcfabc       storage-provisioner
	99df038f5fec0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   ba313dfabc818       kube-scheduler-multinode-107035
	59b18977693bc       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   18222dbd322ce       etcd-multinode-107035
	4fbff9603adb0       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   d6cdfe6f0e7b0       kube-controller-manager-multinode-107035
	bf42f4e958a03       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   526e024367c79       kube-apiserver-multinode-107035
	
	
	==> coredns [34a0cc822409bee02b0d0bc6362b3a04f348078067d48a8dbc68632856cedbc8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35248 - 54547 "HINFO IN 6502460867711106792.5678062730375160001. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018120869s
	
	
	==> describe nodes <==
	Name:               multinode-107035
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-107035
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-107035
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T01_38_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 01:38:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-107035
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 01:52:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 01:49:05 +0000   Thu, 29 Feb 2024 01:38:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 01:49:05 +0000   Thu, 29 Feb 2024 01:38:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 01:49:05 +0000   Thu, 29 Feb 2024 01:38:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 01:49:05 +0000   Thu, 29 Feb 2024 01:48:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    multinode-107035
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 10da1b6b30d546138a6a73e6ce850a81
	  System UUID:                10da1b6b-30d5-4613-8a6a-73e6ce850a81
	  Boot ID:                    2005b9a5-42c4-4387-8647-c85d883a8caa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dpkx5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-5fqf2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-107035                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hfz2n                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-107035             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-multinode-107035    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-7vhtd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-107035             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m36s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node multinode-107035 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node multinode-107035 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node multinode-107035 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-107035 event: Registered Node multinode-107035 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-107035 status is now: NodeReady
	  Normal  Starting                 3m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m43s (x8 over 3m43s)  kubelet          Node multinode-107035 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s (x8 over 3m43s)  kubelet          Node multinode-107035 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s (x7 over 3m43s)  kubelet          Node multinode-107035 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m26s                  node-controller  Node multinode-107035 event: Registered Node multinode-107035 in Controller
	
	
	Name:               multinode-107035-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-107035-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-107035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T01_52_09_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 01:50:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-107035-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 01:52:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 01:50:27 +0000   Thu, 29 Feb 2024 01:50:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 01:50:27 +0000   Thu, 29 Feb 2024 01:50:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 01:50:27 +0000   Thu, 29 Feb 2024 01:50:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 01:50:27 +0000   Thu, 29 Feb 2024 01:50:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    multinode-107035-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b139ac4a2da49dea46a8d5507985a3c
	  System UUID:                0b139ac4-a2da-49de-a46a-8d5507985a3c
	  Boot ID:                    2a344455-5ab3-45cf-b681-1e63a4ab4406
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4sjb2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-g9fbr               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-2vt7v            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 104s                  kube-proxy       
	  Normal   Starting                 12m                   kube-proxy       
	  Normal   NodeReady                12m                   kubelet          Node multinode-107035-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m46s                 kubelet          Node multinode-107035-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        119s (x2 over 2m59s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeHasSufficientPID     107s (x7 over 12m)    kubelet          Node multinode-107035-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    107s (x7 over 12m)    kubelet          Node multinode-107035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  107s (x7 over 12m)    kubelet          Node multinode-107035-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 106s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)   kubelet          Node multinode-107035-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)   kubelet          Node multinode-107035-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)   kubelet          Node multinode-107035-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                  kubelet          Node multinode-107035-m02 status is now: NodeReady
	  Normal   RegisteredNode           101s                  node-controller  Node multinode-107035-m02 event: Registered Node multinode-107035-m02 in Controller
	
	
	Name:               multinode-107035-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-107035-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=multinode-107035
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T01_52_09_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 01:52:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-107035-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 01:52:08 +0000   Thu, 29 Feb 2024 01:52:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 01:52:08 +0000   Thu, 29 Feb 2024 01:52:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 01:52:08 +0000   Thu, 29 Feb 2024 01:52:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 01:52:08 +0000   Thu, 29 Feb 2024 01:52:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    multinode-107035-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea6993fc93eb456cbe32ca20112ad718
	  System UUID:                ea6993fc-93eb-456c-be32-ca20112ad718
	  Boot ID:                    223e0bbe-f89f-4ef9-9537-f8a7f92dcf63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-mwnbb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kindnet-tqzhh               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-fhzft            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 3s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-107035-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-107035-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-107035-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-107035-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-107035-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-107035-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-107035-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-107035-m03 status is now: NodeReady
	  Normal   NodeNotReady             75s                kubelet     Node multinode-107035-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        30s (x2 over 90s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-107035-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-107035-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-107035-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-107035-m03 status is now: NodeHasSufficientMemory
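
The ContainerGCFailed warning above is the kubelet failing to dial the CRI-O socket while the node was being restarted: /var/run/crio/crio.sock did not exist yet, so every garbage-collection RPC failed until crio came back. A minimal Go probe of that failure mode, assuming nothing beyond the socket path shown in the event, looks like this:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the CRI-O socket the kubelet uses for container GC. While crio is
	// down (as during the node restart above) this fails with
	// "connect: no such file or directory", matching the event message.
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", 3*time.Second)
		if err != nil {
			fmt.Println("crio.sock unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("crio.sock reachable")
	}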
	
	
	==> dmesg <==
	[Feb29 01:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052370] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043565] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Feb29 01:48] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.401136] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.734535] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.339095] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.057381] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071002] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.174716] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.151097] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.238676] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[ +16.882556] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.060577] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.438783] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.242396] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.360629] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [59b18977693bcb7f0a9f8322cd0669191d537a1b92b2f94e0055d5b98227ea66] <==
	{"level":"info","ts":"2024-02-29T01:48:31.507021Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T01:48:31.507265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de switched to configuration voters=(17904122316942555358)"}
	{"level":"info","ts":"2024-02-29T01:48:31.507375Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dc4003dc2fbf749","local-member-id":"f87838631c8138de","added-peer-id":"f87838631c8138de","added-peer-peer-urls":["https://192.168.39.183:2380"]}
	{"level":"info","ts":"2024-02-29T01:48:31.50749Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dc4003dc2fbf749","local-member-id":"f87838631c8138de","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T01:48:31.507658Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T01:48:31.526241Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T01:48:31.526379Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2024-02-29T01:48:31.528556Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2024-02-29T01:48:31.52646Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f87838631c8138de","initial-advertise-peer-urls":["https://192.168.39.183:2380"],"listen-peer-urls":["https://192.168.39.183:2380"],"advertise-client-urls":["https://192.168.39.183:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.183:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T01:48:31.526487Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T01:48:33.01416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T01:48:33.014266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T01:48:33.0143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de received MsgPreVoteResp from f87838631c8138de at term 2"}
	{"level":"info","ts":"2024-02-29T01:48:33.014329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T01:48:33.014354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de received MsgVoteResp from f87838631c8138de at term 3"}
	{"level":"info","ts":"2024-02-29T01:48:33.014381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became leader at term 3"}
	{"level":"info","ts":"2024-02-29T01:48:33.014407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f87838631c8138de elected leader f87838631c8138de at term 3"}
	{"level":"info","ts":"2024-02-29T01:48:33.020561Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T01:48:33.021426Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T01:48:33.020472Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f87838631c8138de","local-member-attributes":"{Name:multinode-107035 ClientURLs:[https://192.168.39.183:2379]}","request-path":"/0/members/f87838631c8138de/attributes","cluster-id":"2dc4003dc2fbf749","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T01:48:33.023593Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T01:48:33.024321Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.183:2379"}
	{"level":"info","ts":"2024-02-29T01:48:33.024636Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T01:48:33.024678Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T01:48:39.270066Z","caller":"traceutil/trace.go:171","msg":"trace[581694653] transaction","detail":"{read_only:false; response_revision:807; number_of_response:1; }","duration":"115.891816ms","start":"2024-02-29T01:48:39.154162Z","end":"2024-02-29T01:48:39.270054Z","steps":["trace[581694653] 'process raft request'  (duration: 115.803689ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:52:13 up 4 min,  0 users,  load average: 0.18, 0.24, 0.11
	Linux multinode-107035 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9033efbb92c61b305b10851e975dbcc3fdf359ff85fb7ebf99e2136498d20a5b] <==
	I0229 01:51:40.755179       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0229 01:51:40.755295       1 main.go:227] handling current node
	I0229 01:51:40.755318       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0229 01:51:40.755336       1 main.go:250] Node multinode-107035-m02 has CIDR [10.244.1.0/24] 
	I0229 01:51:40.755604       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0229 01:51:40.755695       1 main.go:250] Node multinode-107035-m03 has CIDR [10.244.3.0/24] 
	I0229 01:51:50.771444       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0229 01:51:50.771757       1 main.go:227] handling current node
	I0229 01:51:50.771801       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0229 01:51:50.771847       1 main.go:250] Node multinode-107035-m02 has CIDR [10.244.1.0/24] 
	I0229 01:51:50.772064       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0229 01:51:50.772104       1 main.go:250] Node multinode-107035-m03 has CIDR [10.244.3.0/24] 
	I0229 01:52:00.785805       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0229 01:52:00.785999       1 main.go:227] handling current node
	I0229 01:52:00.786036       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0229 01:52:00.786062       1 main.go:250] Node multinode-107035-m02 has CIDR [10.244.1.0/24] 
	I0229 01:52:00.786209       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0229 01:52:00.786232       1 main.go:250] Node multinode-107035-m03 has CIDR [10.244.3.0/24] 
	I0229 01:52:10.798207       1 main.go:223] Handling node with IPs: map[192.168.39.183:{}]
	I0229 01:52:10.798377       1 main.go:227] handling current node
	I0229 01:52:10.798471       1 main.go:223] Handling node with IPs: map[192.168.39.26:{}]
	I0229 01:52:10.798497       1 main.go:250] Node multinode-107035-m02 has CIDR [10.244.1.0/24] 
	I0229 01:52:10.798715       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I0229 01:52:10.798740       1 main.go:250] Node multinode-107035-m03 has CIDR [10.244.2.0/24] 
	I0229 01:52:10.798818       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.121 Flags: [] Table: 0} 
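
The last two kindnet lines show the node coming back with a new PodCIDR: 10.244.2.0/24 replaces the pre-restart 10.244.3.0/24, so kindnet installs a fresh route via the node IP 192.168.39.121. A sketch of that route update, assuming the github.com/vishvananda/netlink package (which kindnet builds on) and the addresses from the log:

	package main

	import (
		"log"
		"net"

		"github.com/vishvananda/netlink"
	)

	// Install the route from the log: 10.244.2.0/24 via 192.168.39.121.
	// RouteReplace is idempotent if the route already exists; removing the
	// stale 10.244.3.0/24 entry would be a separate netlink.RouteDel call.
	// Requires Linux and CAP_NET_ADMIN.
	func main() {
		_, dst, err := net.ParseCIDR("10.244.2.0/24")
		if err != nil {
			log.Fatal(err)
		}
		route := &netlink.Route{Dst: dst, Gw: net.ParseIP("192.168.39.121")}
		if err := netlink.RouteReplace(route); err != nil {
			log.Fatal(err)
		}
	}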
	
	
	==> kube-apiserver [bf42f4e958a034fd410ebf308cde5b945317bfaa80f11e963499fedc95be5c7e] <==
	I0229 01:48:34.415666       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0229 01:48:34.416602       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0229 01:48:34.416655       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0229 01:48:34.486022       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 01:48:34.486144       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 01:48:34.560427       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 01:48:34.601167       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 01:48:34.604806       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 01:48:34.608083       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 01:48:34.608134       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 01:48:34.616772       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 01:48:34.618187       1 aggregator.go:166] initial CRD sync complete...
	I0229 01:48:34.618221       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 01:48:34.618228       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 01:48:34.618235       1 cache.go:39] Caches are synced for autoregister controller
	I0229 01:48:34.620627       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 01:48:34.620735       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 01:48:34.622094       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 01:48:35.419079       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 01:48:37.112490       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 01:48:37.264308       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 01:48:37.276433       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 01:48:37.345838       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 01:48:37.356728       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0229 01:49:23.928180       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4fbff9603adb04b7a2dee1ddf10a06c56acf33459acb6bcaaecc9f0c8b8cf4d0] <==
	I0229 01:50:27.609709       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-gz4cd" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-gz4cd"
	I0229 01:50:27.626115       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-107035-m02" podCIDRs=["10.244.1.0/24"]
	I0229 01:50:27.953928       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107035-m02"
	I0229 01:50:28.136890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="4.945549ms"
	I0229 01:50:28.137577       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="122.882µs"
	I0229 01:50:28.510392       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="102.698µs"
	I0229 01:50:32.235458       1 event.go:307] "Event occurred" object="multinode-107035-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-107035-m02 event: Registered Node multinode-107035-m02 in Controller"
	I0229 01:50:36.455232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="97.062µs"
	I0229 01:50:38.338877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="178.395µs"
	I0229 01:50:38.347692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="58.969µs"
	I0229 01:50:58.951400       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107035-m02"
	I0229 01:52:04.880051       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-4sjb2"
	I0229 01:52:04.889094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="23.8892ms"
	I0229 01:52:04.900911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.749978ms"
	I0229 01:52:04.922669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="21.691964ms"
	I0229 01:52:04.922926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="52.723µs"
	I0229 01:52:05.628485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.694852ms"
	I0229 01:52:05.629359       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="60.138µs"
	I0229 01:52:07.892157       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107035-m02"
	I0229 01:52:08.571667       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107035-m02"
	I0229 01:52:08.572060       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-mwnbb" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-mwnbb"
	I0229 01:52:08.572143       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-107035-m03\" does not exist"
	I0229 01:52:08.603144       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-107035-m03" podCIDRs=["10.244.2.0/24"]
	I0229 01:52:08.919748       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107035-m02"
	I0229 01:52:09.463478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="109.72µs"
	
	
	==> kube-proxy [cbcd03c744053e9873ccea3addf83090568b6bd919a19470eea31df2083d76d9] <==
	I0229 01:48:35.998334       1 server_others.go:69] "Using iptables proxy"
	I0229 01:48:36.008961       1 node.go:141] Successfully retrieved node IP: 192.168.39.183
	I0229 01:48:36.170747       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 01:48:36.170769       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 01:48:36.220482       1 server_others.go:152] "Using iptables Proxier"
	I0229 01:48:36.224631       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 01:48:36.225582       1 server.go:846] "Version info" version="v1.28.4"
	I0229 01:48:36.225597       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 01:48:36.240936       1 config.go:188] "Starting service config controller"
	I0229 01:48:36.240961       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 01:48:36.241205       1 config.go:97] "Starting endpoint slice config controller"
	I0229 01:48:36.241212       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 01:48:36.250805       1 config.go:315] "Starting node config controller"
	I0229 01:48:36.251209       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 01:48:36.342127       1 shared_informer.go:318] Caches are synced for service config
	I0229 01:48:36.342635       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 01:48:36.352233       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [99df038f5fec0f6148a678458f048a257aea24620a2edc83f782d7c20809163c] <==
	I0229 01:48:31.946755       1 serving.go:348] Generated self-signed cert in-memory
	W0229 01:48:34.537161       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 01:48:34.537218       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 01:48:34.537229       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 01:48:34.537236       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 01:48:34.569103       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 01:48:34.569151       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 01:48:34.571170       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 01:48:34.571251       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 01:48:34.571266       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 01:48:34.571279       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 01:48:34.671976       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 01:48:37 multinode-107035 kubelet[878]: E0229 01:48:37.151839     878 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-5fqf2" podUID="2730e330-16ca-4b2d-a5dc-330ff37ab57e"
	Feb 29 01:48:38 multinode-107035 kubelet[878]: E0229 01:48:38.673921     878 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 29 01:48:38 multinode-107035 kubelet[878]: E0229 01:48:38.673983     878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2730e330-16ca-4b2d-a5dc-330ff37ab57e-config-volume podName:2730e330-16ca-4b2d-a5dc-330ff37ab57e nodeName:}" failed. No retries permitted until 2024-02-29 01:48:42.673969976 +0000 UTC m=+12.783112098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2730e330-16ca-4b2d-a5dc-330ff37ab57e-config-volume") pod "coredns-5dd5756b68-5fqf2" (UID: "2730e330-16ca-4b2d-a5dc-330ff37ab57e") : object "kube-system"/"coredns" not registered
	Feb 29 01:48:38 multinode-107035 kubelet[878]: E0229 01:48:38.774429     878 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 29 01:48:38 multinode-107035 kubelet[878]: E0229 01:48:38.774455     878 projected.go:198] Error preparing data for projected volume kube-api-access-76q7v for pod default/busybox-5b5d89c9d6-dpkx5: object "default"/"kube-root-ca.crt" not registered
	Feb 29 01:48:38 multinode-107035 kubelet[878]: E0229 01:48:38.774570     878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/209e9e3f-1414-4989-94ea-5e41052c8293-kube-api-access-76q7v podName:209e9e3f-1414-4989-94ea-5e41052c8293 nodeName:}" failed. No retries permitted until 2024-02-29 01:48:42.77455551 +0000 UTC m=+12.883697637 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76q7v" (UniqueName: "kubernetes.io/projected/209e9e3f-1414-4989-94ea-5e41052c8293-kube-api-access-76q7v") pod "busybox-5b5d89c9d6-dpkx5" (UID: "209e9e3f-1414-4989-94ea-5e41052c8293") : object "default"/"kube-root-ca.crt" not registered
	Feb 29 01:48:39 multinode-107035 kubelet[878]: E0229 01:48:39.150559     878 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-dpkx5" podUID="209e9e3f-1414-4989-94ea-5e41052c8293"
	Feb 29 01:48:39 multinode-107035 kubelet[878]: E0229 01:48:39.150981     878 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-5fqf2" podUID="2730e330-16ca-4b2d-a5dc-330ff37ab57e"
	Feb 29 01:48:40 multinode-107035 kubelet[878]: I0229 01:48:40.666657     878 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Feb 29 01:49:06 multinode-107035 kubelet[878]: I0229 01:49:06.387737     878 scope.go:117] "RemoveContainer" containerID="c3d647b251f0c07c4e910afc030ac987f074092a1860a29c742372dcdecb4df7"
	Feb 29 01:49:30 multinode-107035 kubelet[878]: E0229 01:49:30.184930     878 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 01:49:30 multinode-107035 kubelet[878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 01:49:30 multinode-107035 kubelet[878]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 01:49:30 multinode-107035 kubelet[878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 01:49:30 multinode-107035 kubelet[878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 01:50:30 multinode-107035 kubelet[878]: E0229 01:50:30.184830     878 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 01:50:30 multinode-107035 kubelet[878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 01:50:30 multinode-107035 kubelet[878]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 01:50:30 multinode-107035 kubelet[878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 01:50:30 multinode-107035 kubelet[878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 01:51:30 multinode-107035 kubelet[878]: E0229 01:51:30.183835     878 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 01:51:30 multinode-107035 kubelet[878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 01:51:30 multinode-107035 kubelet[878]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 01:51:30 multinode-107035 kubelet[878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 01:51:30 multinode-107035 kubelet[878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
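
The recurring iptables-canary errors are incidental to these IPv4-only tests: ip6tables exits with status 3 because the guest kernel lacks the ip6table_nat module, so the kubelet's IPv6 canary chain can never be created. A small sketch that reproduces the kubelet's check, assuming only that ip6tables is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// List the ip6tables nat table, as the kubelet canary does. On this guest
	// kernel it fails with exit status 3 ("Table does not exist"), matching
	// the log above; loading ip6table_nat would make it pass.
	func main() {
		out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
		if err != nil {
			fmt.Printf("nat table unavailable: %v\n%s", err, out)
			return
		}
		fmt.Println("ip6tables nat table present")
	}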
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-107035 -n multinode-107035
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-107035 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (680.56s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 stop
E0229 01:54:09.039703  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-107035 stop: exit status 82 (2m0.279094976s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-107035"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-107035 stop": exit status 82
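
Exit status 82 (GUEST_STOP_TIMEOUT) means minikube polled the VM's state for the whole stop window without ever seeing it leave "Running". A hedged sketch of that poll-until-deadline pattern, where getState is a stand-in for the libmachine driver call (an assumption, not minikube's actual code), with the window shortened from minikube's 2 minutes for demonstration:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// waitStopped polls getState until the guest reports "Stopped" or the
	// context deadline expires, mirroring the timeout seen in the test.
	func waitStopped(ctx context.Context, getState func() (string, error)) error {
		tick := time.NewTicker(2 * time.Second)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return errors.New(`stop: unable to stop vm, current state "Running"`)
			case <-tick.C:
				st, err := getState()
				if err != nil {
					return err
				}
				if st == "Stopped" {
					return nil
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// A guest stuck in "Running" reproduces the exit-status-82 path.
		fmt.Println(waitStopped(ctx, func() (string, error) { return "Running", nil }))
	}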
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-107035 status: exit status 3 (18.765127833s)

                                                
                                                
-- stdout --
	multinode-107035
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-107035-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:54:34.962574  343244 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0229 01:54:34.962616  343244 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-107035 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-107035 -n multinode-107035
E0229 01:54:37.826718  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-107035 -n multinode-107035: exit status 3 (3.195787857s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:54:38.322574  343326 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0229 01:54:38.322592  343326 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-107035" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.24s)

                                                
                                    
x
+
TestPreload (275.63s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-309501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0229 02:04:09.040621  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 02:04:37.825306  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-309501 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m12.694119894s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-309501 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-309501 image pull gcr.io/k8s-minikube/busybox: (3.184266555s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-309501
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-309501: exit status 82 (2m0.278462818s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-309501"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-309501 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-02-29 02:07:15.421629202 +0000 UTC m=+3393.114776115
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-309501 -n test-preload-309501
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-309501 -n test-preload-309501: exit status 3 (18.554281716s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:07:33.970602  346284 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E0229 02:07:33.970624  346284 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-309501" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-309501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-309501
--- FAIL: TestPreload (275.63s)

                                                
                                    
x
+
TestKubernetesUpgrade (373.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171039 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-171039 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m2.790080766s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-171039] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node kubernetes-upgrade-171039 in cluster kubernetes-upgrade-171039
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 02:10:47.867733  350297 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:10:47.867965  350297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:10:47.867975  350297 out.go:304] Setting ErrFile to fd 2...
	I0229 02:10:47.867979  350297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:10:47.868179  350297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:10:47.868771  350297 out.go:298] Setting JSON to false
	I0229 02:10:47.869822  350297 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6791,"bootTime":1709165857,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:10:47.869896  350297 start.go:139] virtualization: kvm guest
	I0229 02:10:47.872224  350297 out.go:177] * [kubernetes-upgrade-171039] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:10:47.873510  350297 notify.go:220] Checking for updates...
	I0229 02:10:47.873519  350297 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:10:47.874801  350297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:10:47.876098  350297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:10:47.877386  350297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:10:47.878632  350297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:10:47.879645  350297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:10:47.881551  350297 config.go:182] Loaded profile config "NoKubernetes-424173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:10:47.881710  350297 config.go:182] Loaded profile config "offline-crio-395379": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:10:47.881829  350297 config.go:182] Loaded profile config "running-upgrade-546307": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0229 02:10:47.881945  350297 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:10:47.918952  350297 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 02:10:47.920082  350297 start.go:299] selected driver: kvm2
	I0229 02:10:47.920098  350297 start.go:903] validating driver "kvm2" against <nil>
	I0229 02:10:47.920110  350297 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:10:47.920812  350297 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:10:47.920899  350297 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:10:47.935883  350297 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:10:47.935933  350297 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:10:47.936256  350297 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 02:10:47.936351  350297 cni.go:84] Creating CNI manager for ""
	I0229 02:10:47.936369  350297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:10:47.936378  350297 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 02:10:47.936391  350297 start_flags.go:323] config:
	{Name:kubernetes-upgrade-171039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-171039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:10:47.936580  350297 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:10:47.938128  350297 out.go:177] * Starting control plane node kubernetes-upgrade-171039 in cluster kubernetes-upgrade-171039
	I0229 02:10:47.939006  350297 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:10:47.939042  350297 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 02:10:47.939053  350297 cache.go:56] Caching tarball of preloaded images
	I0229 02:10:47.939164  350297 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:10:47.939176  350297 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 02:10:47.939286  350297 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/config.json ...
	I0229 02:10:47.939311  350297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/config.json: {Name:mka049ca3003dc05843094cad8cb39c115d72087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:10:47.939471  350297 start.go:365] acquiring machines lock for kubernetes-upgrade-171039: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:11:14.999113  350297 start.go:369] acquired machines lock for "kubernetes-upgrade-171039" in 27.059590989s
	I0229 02:11:14.999188  350297 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-171039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-171039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:11:14.999364  350297 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 02:11:15.003670  350297 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:11:15.003862  350297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:11:15.003914  350297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:11:15.024273  350297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35723
	I0229 02:11:15.024731  350297 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:11:15.025314  350297 main.go:141] libmachine: Using API Version  1
	I0229 02:11:15.025340  350297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:11:15.025787  350297 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:11:15.026047  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetMachineName
	I0229 02:11:15.026270  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .DriverName
	I0229 02:11:15.026462  350297 start.go:159] libmachine.API.Create for "kubernetes-upgrade-171039" (driver="kvm2")
	I0229 02:11:15.026498  350297 client.go:168] LocalClient.Create starting
	I0229 02:11:15.026567  350297 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem
	I0229 02:11:15.026605  350297 main.go:141] libmachine: Decoding PEM data...
	I0229 02:11:15.026622  350297 main.go:141] libmachine: Parsing certificate...
	I0229 02:11:15.026703  350297 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem
	I0229 02:11:15.026723  350297 main.go:141] libmachine: Decoding PEM data...
	I0229 02:11:15.026739  350297 main.go:141] libmachine: Parsing certificate...
	I0229 02:11:15.026759  350297 main.go:141] libmachine: Running pre-create checks...
	I0229 02:11:15.026768  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .PreCreateCheck
	I0229 02:11:15.027277  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetConfigRaw
	I0229 02:11:15.027768  350297 main.go:141] libmachine: Creating machine...
	I0229 02:11:15.027787  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .Create
	I0229 02:11:15.027937  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Creating KVM machine...
	I0229 02:11:15.029266  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found existing default KVM network
	I0229 02:11:15.031797  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:15.031633  350709 network.go:210] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 02:11:15.032704  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:15.032623  350709 network.go:207] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012ddd0}
	I0229 02:11:15.038106  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | trying to create private KVM network mk-kubernetes-upgrade-171039 192.168.50.0/24...
	I0229 02:11:15.114931  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | private KVM network mk-kubernetes-upgrade-171039 192.168.50.0/24 created
	I0229 02:11:15.115081  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Setting up store path in /home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039 ...
	I0229 02:11:15.115111  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Building disk image from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 02:11:15.115159  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:15.115075  350709 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:11:15.115250  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Downloading /home/jenkins/minikube-integration/18063-316644/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:11:15.361209  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:15.361096  350709 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039/id_rsa...
	I0229 02:11:15.577968  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:15.577792  350709 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039/kubernetes-upgrade-171039.rawdisk...
	I0229 02:11:15.578002  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Writing magic tar header
	I0229 02:11:15.578022  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Writing SSH key tar header
	I0229 02:11:15.578035  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:15.577925  350709 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039 ...
	I0229 02:11:15.578057  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039
	I0229 02:11:15.578074  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines
	I0229 02:11:15.578088  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039 (perms=drwx------)
	I0229 02:11:15.578117  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines (perms=drwxr-xr-x)
	I0229 02:11:15.578129  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:11:15.578140  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube (perms=drwxr-xr-x)
	I0229 02:11:15.578158  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644 (perms=drwxrwxr-x)
	I0229 02:11:15.578172  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 02:11:15.578182  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644
	I0229 02:11:15.578199  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 02:11:15.578205  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Checking permissions on dir: /home/jenkins
	I0229 02:11:15.578211  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Checking permissions on dir: /home
	I0229 02:11:15.578217  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 02:11:15.578240  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Skipping /home - not owner
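
The permission lines above walk from the machine directory up toward the root, ensuring each directory the process owns is searchable and skipping directories it does not own (hence "Skipping /home - not owner"). A rough Linux-only sketch of that walk, with fixPermsUp as a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// fixPermsUp walks from dir up to the filesystem root, making sure each
// directory we own carries its owner-execute (search) bit and skipping
// directories owned by someone else -- the behaviour the "Checking
// permissions on dir" / "Skipping ... - not owner" lines describe.
func fixPermsUp(dir string) error {
	uid := uint32(os.Getuid())
	for d := dir; ; d = filepath.Dir(d) {
		fmt.Println("Checking permissions on dir:", d)
		info, err := os.Stat(d)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t) // Linux-specific owner lookup
		if !ok || st.Uid != uid {
			fmt.Println("Skipping", d, "- not owner")
		} else if mode := info.Mode(); mode&0100 == 0 {
			if err := os.Chmod(d, mode|0100); err != nil {
				return err
			}
			fmt.Printf("Set executable bit on %s (perms=%s)\n", d, mode|0100)
		}
		if d == filepath.Dir(d) { // reached "/"
			return nil
		}
	}
}

func main() {
	if err := fixPermsUp("/home/jenkins/minikube-integration"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
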
	I0229 02:11:15.578249  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Creating domain...
	I0229 02:11:15.579402  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) define libvirt domain using xml: 
	I0229 02:11:15.579428  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) <domain type='kvm'>
	I0229 02:11:15.579435  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   <name>kubernetes-upgrade-171039</name>
	I0229 02:11:15.579444  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   <memory unit='MiB'>2200</memory>
	I0229 02:11:15.579456  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   <vcpu>2</vcpu>
	I0229 02:11:15.579466  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   <features>
	I0229 02:11:15.579478  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <acpi/>
	I0229 02:11:15.579485  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <apic/>
	I0229 02:11:15.579494  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <pae/>
	I0229 02:11:15.579502  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     
	I0229 02:11:15.579508  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   </features>
	I0229 02:11:15.579518  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   <cpu mode='host-passthrough'>
	I0229 02:11:15.579559  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   
	I0229 02:11:15.579588  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   </cpu>
	I0229 02:11:15.579611  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   <os>
	I0229 02:11:15.579637  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <type>hvm</type>
	I0229 02:11:15.579651  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <boot dev='cdrom'/>
	I0229 02:11:15.579659  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <boot dev='hd'/>
	I0229 02:11:15.579669  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <bootmenu enable='no'/>
	I0229 02:11:15.579677  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   </os>
	I0229 02:11:15.579686  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   <devices>
	I0229 02:11:15.579704  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <disk type='file' device='cdrom'>
	I0229 02:11:15.579719  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039/boot2docker.iso'/>
	I0229 02:11:15.579726  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <target dev='hdc' bus='scsi'/>
	I0229 02:11:15.579733  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <readonly/>
	I0229 02:11:15.579740  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     </disk>
	I0229 02:11:15.579746  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <disk type='file' device='disk'>
	I0229 02:11:15.579752  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 02:11:15.579765  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039/kubernetes-upgrade-171039.rawdisk'/>
	I0229 02:11:15.579773  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <target dev='hda' bus='virtio'/>
	I0229 02:11:15.579785  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     </disk>
	I0229 02:11:15.579795  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <interface type='network'>
	I0229 02:11:15.579805  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <source network='mk-kubernetes-upgrade-171039'/>
	I0229 02:11:15.579820  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <model type='virtio'/>
	I0229 02:11:15.579827  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     </interface>
	I0229 02:11:15.579833  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <interface type='network'>
	I0229 02:11:15.579841  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <source network='default'/>
	I0229 02:11:15.579847  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <model type='virtio'/>
	I0229 02:11:15.579861  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     </interface>
	I0229 02:11:15.579870  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <serial type='pty'>
	I0229 02:11:15.579874  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <target port='0'/>
	I0229 02:11:15.579880  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     </serial>
	I0229 02:11:15.579884  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <console type='pty'>
	I0229 02:11:15.579889  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <target type='serial' port='0'/>
	I0229 02:11:15.579893  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     </console>
	I0229 02:11:15.579898  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     <rng model='virtio'>
	I0229 02:11:15.579904  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)       <backend model='random'>/dev/random</backend>
	I0229 02:11:15.579948  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     </rng>
	I0229 02:11:15.579972  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     
	I0229 02:11:15.579985  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)     
	I0229 02:11:15.579996  350297 main.go:141] libmachine: (kubernetes-upgrade-171039)   </devices>
	I0229 02:11:15.580004  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) </domain>
	I0229 02:11:15.580009  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) 
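
minikube renders the domain definition logged above from a Go template before handing it to libvirt. A stripped-down sketch with text/template; the template keeps only a few of the logged elements, and the domainConfig type is invented for illustration:

package main

import (
	"os"
	"text/template"
)

// domainTmpl is an abbreviated form of the libvirt definition logged
// above; only a handful of fields are parameterised here.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name, ISO, Network string
	MemoryMiB, CPUs    int
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	cfg := domainConfig{
		Name:      "kubernetes-upgrade-171039",
		ISO:       "/path/to/boot2docker.iso", // placeholder path
		Network:   "mk-kubernetes-upgrade-171039",
		MemoryMiB: 2200,
		CPUs:      2,
	}
	// Render the definition to stdout; the driver would pass it to libvirt.
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
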
	I0229 02:11:15.587269  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:0e:f0:ea in network default
	I0229 02:11:15.588062  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Ensuring networks are active...
	I0229 02:11:15.588079  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:15.588910  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Ensuring network default is active
	I0229 02:11:15.589271  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Ensuring network mk-kubernetes-upgrade-171039 is active
	I0229 02:11:15.589692  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Getting domain xml...
	I0229 02:11:15.590580  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Creating domain...
	I0229 02:11:16.875765  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Waiting to get IP...
	I0229 02:11:16.876491  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:16.876936  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:16.876976  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:16.876915  350709 retry.go:31] will retry after 257.846135ms: waiting for machine to come up
	I0229 02:11:17.136455  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:17.136951  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:17.136984  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:17.136899  350709 retry.go:31] will retry after 307.339052ms: waiting for machine to come up
	I0229 02:11:17.445399  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:17.445885  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:17.445935  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:17.445829  350709 retry.go:31] will retry after 343.329028ms: waiting for machine to come up
	I0229 02:11:17.791319  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:17.791820  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:17.791850  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:17.791764  350709 retry.go:31] will retry after 577.218594ms: waiting for machine to come up
	I0229 02:11:18.370526  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:18.371033  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:18.371058  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:18.370985  350709 retry.go:31] will retry after 729.907052ms: waiting for machine to come up
	I0229 02:11:19.103280  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:19.103877  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:19.103905  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:19.103831  350709 retry.go:31] will retry after 776.550903ms: waiting for machine to come up
	I0229 02:11:19.881876  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:19.882767  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:19.882826  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:19.882724  350709 retry.go:31] will retry after 834.004919ms: waiting for machine to come up
	I0229 02:11:20.718416  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:20.719159  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:20.719189  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:20.719056  350709 retry.go:31] will retry after 1.167333462s: waiting for machine to come up
	I0229 02:11:21.888546  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:21.888974  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:21.889004  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:21.888933  350709 retry.go:31] will retry after 1.670378186s: waiting for machine to come up
	I0229 02:11:23.561221  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:23.561673  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:23.561699  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:23.561654  350709 retry.go:31] will retry after 1.739189455s: waiting for machine to come up
	I0229 02:11:25.303143  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:25.303605  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:25.303646  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:25.303564  350709 retry.go:31] will retry after 1.804859029s: waiting for machine to come up
	I0229 02:11:27.110838  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:27.111355  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:27.111379  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:27.111318  350709 retry.go:31] will retry after 2.448863556s: waiting for machine to come up
	I0229 02:11:29.561481  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:29.562009  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:29.562032  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:29.561949  350709 retry.go:31] will retry after 2.884372229s: waiting for machine to come up
	I0229 02:11:32.447610  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:32.448058  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:32.448082  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:32.448018  350709 retry.go:31] will retry after 4.221745807s: waiting for machine to come up
	I0229 02:11:36.672273  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:36.672808  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find current IP address of domain kubernetes-upgrade-171039 in network mk-kubernetes-upgrade-171039
	I0229 02:11:36.672843  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | I0229 02:11:36.672765  350709 retry.go:31] will retry after 6.648307047s: waiting for machine to come up
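
The retry intervals above (roughly 258ms, 307ms, 343ms, 577ms, ... up to 6.6s) follow a jittered, growing backoff while polling for a DHCP lease. A minimal sketch of that pattern; the doubling factor and 8s cap are assumptions, not minikube's exact constants:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff calls fn until it succeeds or attempts run out, sleeping
// a jittered, geometrically growing interval between tries -- the shape
// of the "will retry after ..." lines above.
func retryBackoff(attempts int, base time.Duration, fn func() error) error {
	d := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		sleep := d + time.Duration(rand.Int63n(int64(d))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if d *= 2; d > 8*time.Second { // cap the growth
			d = 8 * time.Second
		}
	}
	return errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	_ = retryBackoff(16, 200*time.Millisecond, func() error {
		calls++
		if calls < 5 {
			return errors.New("no lease yet") // stand-in for the DHCP lookup
		}
		return nil
	})
}
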
	I0229 02:11:43.322389  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.322820  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Found IP for machine: 192.168.50.214
	I0229 02:11:43.322843  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has current primary IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.322852  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Reserving static IP address...
	I0229 02:11:43.323239  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-171039", mac: "52:54:00:05:e0:25", ip: "192.168.50.214"} in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.401603  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Getting to WaitForSSH function...
	I0229 02:11:43.401637  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Reserved static IP address: 192.168.50.214
	I0229 02:11:43.401659  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Waiting for SSH to be available...
	I0229 02:11:43.404465  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.404950  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:43.404982  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.405148  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Using SSH client type: external
	I0229 02:11:43.405178  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039/id_rsa (-rw-------)
	I0229 02:11:43.405206  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:11:43.405223  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | About to run SSH command:
	I0229 02:11:43.405237  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | exit 0
	I0229 02:11:43.542691  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | SSH cmd err, output: <nil>: 
	I0229 02:11:43.542991  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) KVM machine creation complete!
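
The probe above shells out to /usr/bin/ssh with a fixed option set and runs `exit 0` until the guest's sshd answers. A sketch of the same liveness check via os/exec; the option list is trimmed from the log and the key path is a placeholder:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... exit 0` against the guest, the same probe
// logged above; a nil error means sshd accepted the connection.
func sshReady(host, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	for !sshReady("192.168.50.214", "/path/to/id_rsa") { // placeholder key path
		time.Sleep(time.Second)
	}
	fmt.Println("SSH is available")
}
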
	I0229 02:11:43.543309  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetConfigRaw
	I0229 02:11:43.543966  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .DriverName
	I0229 02:11:43.544191  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .DriverName
	I0229 02:11:43.544385  350297 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 02:11:43.544422  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetState
	I0229 02:11:43.545739  350297 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 02:11:43.545755  350297 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 02:11:43.545761  350297 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 02:11:43.545799  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:43.548241  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.548601  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:43.548641  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.548780  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHPort
	I0229 02:11:43.548978  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:43.549121  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:43.549288  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHUsername
	I0229 02:11:43.549491  350297 main.go:141] libmachine: Using SSH client type: native
	I0229 02:11:43.549766  350297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0229 02:11:43.549780  350297 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 02:11:43.667042  350297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:11:43.667077  350297 main.go:141] libmachine: Detecting the provisioner...
	I0229 02:11:43.667089  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:43.670328  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.670757  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:43.670801  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.670940  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHPort
	I0229 02:11:43.671175  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:43.671361  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:43.671554  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHUsername
	I0229 02:11:43.671720  350297 main.go:141] libmachine: Using SSH client type: native
	I0229 02:11:43.671962  350297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0229 02:11:43.671978  350297 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 02:11:43.787555  350297 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 02:11:43.787704  350297 main.go:141] libmachine: found compatible host: buildroot
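
Provisioner detection runs `cat /etc/os-release` over SSH and matches NAME=Buildroot, as shown above. A small sketch of parsing that key=value output; parseOSRelease is an illustrative helper, not minikube's parser:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns `cat /etc/os-release` output into a map, the way
// the detection above decides it is talking to a Buildroot guest.
func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		k, v, ok := strings.Cut(line, "=")
		if !ok || k == "" {
			continue
		}
		kv[k] = strings.Trim(v, `"`) // values may be quoted, e.g. PRETTY_NAME
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n"
	osr := parseOSRelease(out)
	if osr["NAME"] == "Buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}
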
	I0229 02:11:43.787720  350297 main.go:141] libmachine: Provisioning with buildroot...
	I0229 02:11:43.787732  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetMachineName
	I0229 02:11:43.788005  350297 buildroot.go:166] provisioning hostname "kubernetes-upgrade-171039"
	I0229 02:11:43.788029  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetMachineName
	I0229 02:11:43.788221  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:43.790749  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.791086  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:43.791121  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.791190  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHPort
	I0229 02:11:43.791389  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:43.791561  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:43.791729  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHUsername
	I0229 02:11:43.791893  350297 main.go:141] libmachine: Using SSH client type: native
	I0229 02:11:43.792090  350297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0229 02:11:43.792108  350297 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-171039 && echo "kubernetes-upgrade-171039" | sudo tee /etc/hostname
	I0229 02:11:43.929591  350297 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-171039
	
	I0229 02:11:43.929625  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:43.933106  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.933513  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:43.933553  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:43.933742  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHPort
	I0229 02:11:43.933930  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:43.934078  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:43.934241  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHUsername
	I0229 02:11:43.934456  350297 main.go:141] libmachine: Using SSH client type: native
	I0229 02:11:43.934691  350297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0229 02:11:43.934718  350297 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-171039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-171039/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-171039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:11:44.069980  350297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:11:44.070016  350297 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:11:44.070072  350297 buildroot.go:174] setting up certificates
	I0229 02:11:44.070093  350297 provision.go:83] configureAuth start
	I0229 02:11:44.070113  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetMachineName
	I0229 02:11:44.070521  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetIP
	I0229 02:11:44.073636  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.073970  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.073999  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.074198  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:44.076915  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.077303  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.077346  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.077453  350297 provision.go:138] copyHostCerts
	I0229 02:11:44.077531  350297 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:11:44.077545  350297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:11:44.077618  350297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:11:44.077750  350297 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:11:44.077764  350297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:11:44.077796  350297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:11:44.077876  350297 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:11:44.077889  350297 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:11:44.077914  350297 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:11:44.077973  350297 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-171039 san=[192.168.50.214 192.168.50.214 localhost 127.0.0.1 minikube kubernetes-upgrade-171039]
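
The provision step above generates a server certificate carrying the logged SANs (both IPs, localhost, minikube, the hostname), signed by the local CA. A compressed sketch with crypto/x509 showing how those SANs are attached; it self-signs for brevity where the real flow signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-171039"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the "generating server cert" line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.50.214"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-171039"},
	}
	// Self-signed for brevity; the provisioner signs with the CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
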
	I0229 02:11:44.181282  350297 provision.go:172] copyRemoteCerts
	I0229 02:11:44.181358  350297 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:11:44.181394  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:44.184362  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.184845  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.184879  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.185038  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHPort
	I0229 02:11:44.185243  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:44.185466  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHUsername
	I0229 02:11:44.185634  350297 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039/id_rsa Username:docker}
	I0229 02:11:44.282822  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:11:44.314454  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 02:11:44.345306  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:11:44.374810  350297 provision.go:86] duration metric: configureAuth took 304.696717ms
	I0229 02:11:44.374848  350297 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:11:44.375067  350297 config.go:182] Loaded profile config "kubernetes-upgrade-171039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:11:44.375176  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:44.378387  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.378804  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.378833  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.379012  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHPort
	I0229 02:11:44.379223  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:44.379401  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:44.379566  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHUsername
	I0229 02:11:44.379727  350297 main.go:141] libmachine: Using SSH client type: native
	I0229 02:11:44.379894  350297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0229 02:11:44.379908  350297 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:11:44.686713  350297 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:11:44.686753  350297 main.go:141] libmachine: Checking connection to Docker...
	I0229 02:11:44.686765  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetURL
	I0229 02:11:44.688203  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | Using libvirt version 6000000
	I0229 02:11:44.690744  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.691159  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.691194  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.691345  350297 main.go:141] libmachine: Docker is up and running!
	I0229 02:11:44.691360  350297 main.go:141] libmachine: Reticulating splines...
	I0229 02:11:44.691368  350297 client.go:171] LocalClient.Create took 29.664857493s
	I0229 02:11:44.691403  350297 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-171039" took 29.664941437s
	I0229 02:11:44.691418  350297 start.go:300] post-start starting for "kubernetes-upgrade-171039" (driver="kvm2")
	I0229 02:11:44.691433  350297 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:11:44.691462  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .DriverName
	I0229 02:11:44.691756  350297 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:11:44.691789  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:44.694304  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.694634  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.694659  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.694839  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHPort
	I0229 02:11:44.695049  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:44.695235  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHUsername
	I0229 02:11:44.695384  350297 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039/id_rsa Username:docker}
	I0229 02:11:44.790460  350297 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:11:44.795810  350297 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:11:44.795841  350297 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:11:44.795969  350297 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:11:44.796074  350297 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:11:44.796191  350297 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:11:44.806899  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:11:44.836782  350297 start.go:303] post-start completed in 145.3456ms
	I0229 02:11:44.836859  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetConfigRaw
	I0229 02:11:44.837498  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetIP
	I0229 02:11:44.840303  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.840771  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.840809  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.841030  350297 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/config.json ...
	I0229 02:11:44.841226  350297 start.go:128] duration metric: createHost completed in 29.841846474s
	I0229 02:11:44.841251  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:44.843468  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.843845  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.843881  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.843996  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHPort
	I0229 02:11:44.844161  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:44.844317  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:44.844522  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHUsername
	I0229 02:11:44.844735  350297 main.go:141] libmachine: Using SSH client type: native
	I0229 02:11:44.844949  350297 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.214 22 <nil> <nil>}
	I0229 02:11:44.844961  350297 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:11:44.965417  350297 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709172704.953838440
	
	I0229 02:11:44.965441  350297 fix.go:206] guest clock: 1709172704.953838440
	I0229 02:11:44.965449  350297 fix.go:219] Guest: 2024-02-29 02:11:44.95383844 +0000 UTC Remote: 2024-02-29 02:11:44.84123983 +0000 UTC m=+57.025653524 (delta=112.59861ms)
	I0229 02:11:44.965477  350297 fix.go:190] guest clock delta is within tolerance: 112.59861ms
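
The fix.go lines above parse the guest's `date +%s.%N` output and compare it against the host clock, accepting small deltas. A sketch of that parse-and-compare; the one-second tolerance here is an assumption, not the value minikube uses:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output (e.g. "1709172704.953838440")
// and returns how far the guest clock is from the local one.
func guestClockDelta(out string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	delta, _ := guestClockDelta("1709172704.953838440")
	const tolerance = time.Second // assumed threshold, not minikube's constant
	if -tolerance < delta && delta < tolerance {
		fmt.Println("guest clock delta is within tolerance:", delta)
	} else {
		fmt.Println("guest clock needs adjusting, delta:", delta)
	}
}
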
	I0229 02:11:44.965482  350297 start.go:83] releasing machines lock for "kubernetes-upgrade-171039", held for 29.966331775s
	I0229 02:11:44.965508  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .DriverName
	I0229 02:11:44.965823  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetIP
	I0229 02:11:44.968718  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.969089  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.969117  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.969405  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .DriverName
	I0229 02:11:44.970066  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .DriverName
	I0229 02:11:44.970305  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .DriverName
	I0229 02:11:44.970443  350297 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:11:44.970511  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:44.970530  350297 ssh_runner.go:195] Run: cat /version.json
	I0229 02:11:44.970556  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHHostname
	I0229 02:11:44.973341  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.973463  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.973879  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.973900  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.973940  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:44.973957  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:44.974318  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHPort
	I0229 02:11:44.974490  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHPort
	I0229 02:11:44.974539  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:44.974727  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHUsername
	I0229 02:11:44.974765  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHKeyPath
	I0229 02:11:44.974969  350297 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039/id_rsa Username:docker}
	I0229 02:11:44.975006  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetSSHUsername
	I0229 02:11:44.975118  350297 sshutil.go:53] new ssh client: &{IP:192.168.50.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/kubernetes-upgrade-171039/id_rsa Username:docker}
	I0229 02:11:45.090120  350297 ssh_runner.go:195] Run: systemctl --version
	I0229 02:11:45.096708  350297 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:11:45.267558  350297 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:11:45.276348  350297 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:11:45.276414  350297 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:11:45.303528  350297 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:11:45.303554  350297 start.go:475] detecting cgroup driver to use...
	I0229 02:11:45.303626  350297 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:11:45.324296  350297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:11:45.344190  350297 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:11:45.344252  350297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:11:45.363097  350297 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:11:45.382831  350297 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:11:45.528249  350297 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:11:45.690344  350297 docker.go:233] disabling docker service ...
	I0229 02:11:45.690408  350297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:11:45.708287  350297 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:11:45.724766  350297 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:11:45.880105  350297 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:11:46.011551  350297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:11:46.028524  350297 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:11:46.049703  350297 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 02:11:46.049764  350297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:11:46.062497  350297 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:11:46.062562  350297 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:11:46.075622  350297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:11:46.088931  350297 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:11:46.102033  350297 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:11:46.115724  350297 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:11:46.127641  350297 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:11:46.127724  350297 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:11:46.144351  350297 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
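
When the sysctl probe above fails because /proc/sys/net/bridge is absent, the log falls back to loading br_netfilter and then enables IPv4 forwarding. A sketch of the same check-then-modprobe fallback (root privileges assumed; helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: when the sysctl file
// is missing, load br_netfilter, then enable IPv4 forwarding the same
// way the logged shell command does.
func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		fmt.Println("couldn't verify netfilter; loading br_netfilter")
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	cmd := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("enable ip_forward: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
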
	I0229 02:11:46.155857  350297 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:11:46.281495  350297 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:11:46.438175  350297 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:11:46.438262  350297 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:11:46.444011  350297 start.go:543] Will wait 60s for crictl version
	I0229 02:11:46.444074  350297 ssh_runner.go:195] Run: which crictl
	I0229 02:11:46.448532  350297 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:11:46.489477  350297 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:11:46.489581  350297 ssh_runner.go:195] Run: crio --version
	I0229 02:11:46.521337  350297 ssh_runner.go:195] Run: crio --version
	I0229 02:11:46.562166  350297 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 02:11:46.563460  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) Calling .GetIP
	I0229 02:11:46.565971  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:46.566281  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:25", ip: ""} in network mk-kubernetes-upgrade-171039: {Iface:virbr1 ExpiryTime:2024-02-29 03:11:31 +0000 UTC Type:0 Mac:52:54:00:05:e0:25 Iaid: IPaddr:192.168.50.214 Prefix:24 Hostname:kubernetes-upgrade-171039 Clientid:01:52:54:00:05:e0:25}
	I0229 02:11:46.566305  350297 main.go:141] libmachine: (kubernetes-upgrade-171039) DBG | domain kubernetes-upgrade-171039 has defined IP address 192.168.50.214 and MAC address 52:54:00:05:e0:25 in network mk-kubernetes-upgrade-171039
	I0229 02:11:46.566503  350297 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:11:46.571185  350297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:11:46.586601  350297 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:11:46.586666  350297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:11:46.621953  350297 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:11:46.622022  350297 ssh_runner.go:195] Run: which lz4
	I0229 02:11:46.626754  350297 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:11:46.631658  350297 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:11:46.631685  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 02:11:48.439915  350297 crio.go:444] Took 1.813177 seconds to copy over tarball
	I0229 02:11:48.440012  350297 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:11:51.028023  350297 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.587962832s)
	I0229 02:11:51.028071  350297 crio.go:451] Took 2.588126 seconds to extract the tarball
	I0229 02:11:51.028086  350297 ssh_runner.go:146] rm: /preloaded.tar.lz4
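Since stat found no /preloaded.tar.lz4 on the VM, minikube copies the ~441 MB preload tarball over SSH and unpacks it under /var, preserving xattrs so file capabilities survive extraction. Reduced to plain shell (host user is illustrative; the paths and tar flags are the ones in the log):

    # Copy the cached preload tarball onto the VM, then extract and clean up.
    scp preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 docker@192.168.50.214:/preloaded.tar.lz4
    ssh docker@192.168.50.214 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'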
	I0229 02:11:51.072175  350297 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:11:51.139161  350297 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:11:51.139192  350297 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:11:51.139283  350297 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:11:51.139282  350297 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:11:51.139301  350297 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:11:51.139319  350297 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:11:51.139330  350297 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:11:51.139333  350297 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:11:51.139339  350297 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:11:51.139337  350297 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:11:51.141058  350297 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:11:51.141067  350297 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:11:51.141110  350297 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:11:51.141122  350297 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:11:51.141058  350297 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:11:51.141051  350297 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:11:51.141170  350297 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:11:51.141308  350297 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:11:51.274166  350297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 02:11:51.325889  350297 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:11:51.325934  350297 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:11:51.325971  350297 ssh_runner.go:195] Run: which crictl
	I0229 02:11:51.330784  350297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:11:51.346014  350297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 02:11:51.383092  350297 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:11:51.410055  350297 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:11:51.410094  350297 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:11:51.410139  350297 ssh_runner.go:195] Run: which crictl
	I0229 02:11:51.416146  350297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:11:51.420817  350297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:11:51.428644  350297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:11:51.430052  350297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:11:51.444979  350297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:11:51.489579  350297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 02:11:51.524118  350297 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:11:51.596988  350297 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:11:51.597050  350297 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:11:51.597099  350297 ssh_runner.go:195] Run: which crictl
	I0229 02:11:51.608375  350297 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:11:51.608421  350297 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:11:51.608470  350297 ssh_runner.go:195] Run: which crictl
	I0229 02:11:51.620214  350297 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:11:51.620268  350297 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:11:51.620319  350297 ssh_runner.go:195] Run: which crictl
	I0229 02:11:51.631994  350297 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:11:51.632057  350297 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:11:51.632107  350297 ssh_runner.go:195] Run: which crictl
	I0229 02:11:51.651560  350297 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:11:51.651602  350297 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:11:51.651659  350297 ssh_runner.go:195] Run: which crictl
	I0229 02:11:51.651675  350297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:11:51.651737  350297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:11:51.651660  350297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:11:51.651748  350297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:11:51.776474  350297 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:11:51.776519  350297 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:11:51.776614  350297 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:11:51.776693  350297 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:11:51.776779  350297 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:11:51.816142  350297 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:11:52.014683  350297 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:11:52.174794  350297 cache_images.go:92] LoadImages completed in 1.035581143s
	W0229 02:11:52.174894  350297 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
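Each "needs transfer" line above is one pass of the same check: inspect the image ID in the runtime with podman, compare it to the expected ID, drop the stale tag with crictl, then fall back to the on-disk cache, which in this run does not contain pause_3.1 (hence the warning). One iteration, sketched (the expected ID is the one from the log):

    # Per-image check minikube performs before deciding an image "needs transfer".
    img=registry.k8s.io/pause:3.1
    want=da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
    have=$(sudo podman image inspect --format '{{.Id}}' "$img" 2>/dev/null)
    if [ "$have" != "$want" ]; then
        sudo crictl rmi "$img"   # remove the mismatched tag; the cached copy is loaded next
    fi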
	I0229 02:11:52.174967  350297 ssh_runner.go:195] Run: crio config
	I0229 02:11:52.234384  350297 cni.go:84] Creating CNI manager for ""
	I0229 02:11:52.234412  350297 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:11:52.234432  350297 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:11:52.234459  350297 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.214 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-171039 NodeName:kubernetes-upgrade-171039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:11:52.234636  350297 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-171039"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-171039
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.214:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
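The rendered bundle above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what later lands in /var/tmp/minikube/kubeadm.yaml. One way to sanity-check such a file without touching the node, assuming the matching kubeadm binary is on the PATH, is a dry run:

    # Exercise the config end to end without applying anything
    # (kubeadm init has carried a --dry-run flag since well before v1.16).
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run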
	
	I0229 02:11:52.234727  350297 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-171039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-171039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
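The [Service] override above becomes /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 433-byte scp just below). The empty ExecStart= line is deliberate systemd idiom: it clears the packaged command before substituting minikube's own. To inspect the merged unit on the node:

    # Print the kubelet unit plus all drop-ins, including 10-kubeadm.conf.
    systemctl cat kubelet
    # Drop-in changes take effect only after systemd reloads its unit files.
    sudo systemctl daemon-reload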
	I0229 02:11:52.234778  350297 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:11:52.246195  350297 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:11:52.246269  350297 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:11:52.260053  350297 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0229 02:11:52.283160  350297 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:11:52.308047  350297 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2187 bytes)
	I0229 02:11:52.332637  350297 ssh_runner.go:195] Run: grep 192.168.50.214	control-plane.minikube.internal$ /etc/hosts
	I0229 02:11:52.338252  350297 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
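This is the same idempotent /etc/hosts pattern used for host.minikube.internal at 02:11:46.571: strip any existing line for the name, append the fresh mapping, and install the result with sudo so only the copy needs root. Generalized (the function name is illustrative):

    # Replace-or-add a hosts entry without duplicating it on reruns.
    set_hosts_entry() {   # usage: set_hosts_entry 192.168.50.214 control-plane.minikube.internal
        local ip=$1 name=$2
        { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
        sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }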
	I0229 02:11:52.357003  350297 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039 for IP: 192.168.50.214
	I0229 02:11:52.357049  350297 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:11:52.357216  350297 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:11:52.357273  350297 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:11:52.357354  350297 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/client.key
	I0229 02:11:52.357372  350297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/client.crt with IP's: []
	I0229 02:11:52.480228  350297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/client.crt ...
	I0229 02:11:52.480267  350297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/client.crt: {Name:mk8f8eb62c2fb44cd3f58bd63b90a70cff83a988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:11:52.480452  350297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/client.key ...
	I0229 02:11:52.480470  350297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/client.key: {Name:mkd3a035dc8620242b4ed1b5b2b12b3d0ef170e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:11:52.480598  350297 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.key.ef88201a
	I0229 02:11:52.480619  350297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.crt.ef88201a with IP's: [192.168.50.214 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:11:52.673141  350297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.crt.ef88201a ...
	I0229 02:11:52.673181  350297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.crt.ef88201a: {Name:mk25e412f67451649ebe727d3b5770217b2d40fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:11:52.673366  350297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.key.ef88201a ...
	I0229 02:11:52.673385  350297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.key.ef88201a: {Name:mk90f38d4d419cc5d111dc4557ebfbf7c10e0b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:11:52.673487  350297 certs.go:337] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.crt.ef88201a -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.crt
	I0229 02:11:52.673601  350297 certs.go:341] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.key.ef88201a -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.key
	I0229 02:11:52.673684  350297 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/proxy-client.key
	I0229 02:11:52.673707  350297 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/proxy-client.crt with IP's: []
	I0229 02:11:52.780249  350297 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/proxy-client.crt ...
	I0229 02:11:52.780294  350297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/proxy-client.crt: {Name:mkba6fb0b571f0d175c2b489bd0074d10980d002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:11:52.780510  350297 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/proxy-client.key ...
	I0229 02:11:52.780530  350297 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/proxy-client.key: {Name:mk23d3de6731ed0dafe5952e6288eac03868a97c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:11:52.780761  350297 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:11:52.780822  350297 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:11:52.780839  350297 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:11:52.780876  350297 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:11:52.780923  350297 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:11:52.780961  350297 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:11:52.781025  350297 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:11:52.781996  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:11:52.815154  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:11:52.847787  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:11:52.883178  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 02:11:52.964399  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:11:52.994477  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:11:53.021097  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:11:53.047849  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:11:53.075683  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:11:53.102066  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:11:53.132023  350297 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:11:53.160079  350297 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:11:53.179640  350297 ssh_runner.go:195] Run: openssl version
	I0229 02:11:53.186188  350297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:11:53.197774  350297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:11:53.202822  350297 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:11:53.202883  350297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:11:53.209004  350297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:11:53.220593  350297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:11:53.232983  350297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:11:53.238185  350297 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:11:53.238267  350297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:11:53.244926  350297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:11:53.260627  350297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:11:53.274064  350297 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:11:53.279653  350297 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:11:53.279709  350297 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:11:53.287990  350297 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
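The /etc/ssl/certs names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: the library resolves CAs through hash-named symlinks, so each installed PEM gets one. For a single certificate the pattern is:

    # Link a CA into the OpenSSL lookup directory under its subject-hash name.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941 in this run
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"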
	I0229 02:11:53.301300  350297 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:11:53.306710  350297 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:11:53.306768  350297 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-171039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-171039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:11:53.306865  350297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:11:53.306917  350297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:11:53.353881  350297 cri.go:89] found id: ""
	I0229 02:11:53.353960  350297 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:11:53.367494  350297 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:11:53.379722  350297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:11:53.390874  350297 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:11:53.390921  350297 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:11:53.829946  350297 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:13:52.757802  350297 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:13:52.757952  350297 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:13:52.759566  350297 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:13:52.759645  350297 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:13:52.759777  350297 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:13:52.759942  350297 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:13:52.760105  350297 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:13:52.760231  350297 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:13:52.760376  350297 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:13:52.760444  350297 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:13:52.760516  350297 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:13:52.762307  350297 out.go:204]   - Generating certificates and keys ...
	I0229 02:13:52.762418  350297 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:13:52.762513  350297 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:13:52.762600  350297 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:13:52.762686  350297 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:13:52.762777  350297 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:13:52.762864  350297 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:13:52.762950  350297 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:13:52.763103  350297 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-171039 localhost] and IPs [192.168.50.214 127.0.0.1 ::1]
	I0229 02:13:52.763191  350297 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:13:52.763362  350297 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-171039 localhost] and IPs [192.168.50.214 127.0.0.1 ::1]
	I0229 02:13:52.763448  350297 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:13:52.763541  350297 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:13:52.763623  350297 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:13:52.763712  350297 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:13:52.763793  350297 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:13:52.763879  350297 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:13:52.763970  350297 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:13:52.764048  350297 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:13:52.764119  350297 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:13:52.765715  350297 out.go:204]   - Booting up control plane ...
	I0229 02:13:52.765806  350297 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:13:52.765889  350297 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:13:52.765977  350297 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:13:52.766082  350297 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:13:52.766244  350297 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:13:52.766325  350297 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:13:52.766427  350297 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:13:52.766632  350297 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:13:52.766725  350297 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:13:52.766973  350297 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:13:52.767055  350297 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:13:52.767271  350297 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:13:52.767357  350297 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:13:52.767621  350297 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:13:52.767723  350297 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:13:52.767895  350297 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:13:52.767904  350297 kubeadm.go:322] 
	I0229 02:13:52.767959  350297 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:13:52.768015  350297 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:13:52.768025  350297 kubeadm.go:322] 
	I0229 02:13:52.768077  350297 kubeadm.go:322] This error is likely caused by:
	I0229 02:13:52.768135  350297 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:13:52.768281  350297 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:13:52.768297  350297 kubeadm.go:322] 
	I0229 02:13:52.768423  350297 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:13:52.768472  350297 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:13:52.768518  350297 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:13:52.768527  350297 kubeadm.go:322] 
	I0229 02:13:52.768659  350297 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:13:52.768776  350297 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:13:52.768885  350297 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:13:52.768954  350297 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:13:52.769052  350297 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:13:52.769150  350297 kubeadm.go:322] 	- 'docker logs CONTAINERID'
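kubeadm's stock hint talks about docker, but this node runs cri-o, so the equivalent listing goes through crictl (used throughout this log). A plausible triage sequence for the failure above:

    # Kubelet health first...
    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 50
    # ...then the control-plane containers via the CRI socket, since the runtime is cri-o.
    sudo crictl ps -a | grep kube | grep -v pause
    sudo crictl logs CONTAINERID   # placeholder ID, mirroring kubeadm's docker example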
	W0229 02:13:52.769250  350297 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-171039 localhost] and IPs [192.168.50.214 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-171039 localhost] and IPs [192.168.50.214 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 02:13:52.769320  350297 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
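After the timeout, minikube tears the half-built control plane down with kubeadm reset against the cri-o socket and immediately retries the identical init (which fails the same way at 02:15:49 below). The reset-then-retry shape, in outline:

    # Wipe what the failed init left behind, then re-run with the same config.
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=Port-10250,Swap,NumCPU   # subset shown; the full flag list is in the log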
	I0229 02:13:53.236478  350297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:13:53.252103  350297 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:13:53.267636  350297 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:13:53.267699  350297 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:13:53.478211  350297 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:15:49.795291  350297 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:15:49.795475  350297 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:15:49.797396  350297 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:15:49.797475  350297 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:15:49.797579  350297 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:15:49.797705  350297 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:15:49.797826  350297 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:15:49.797980  350297 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:15:49.798110  350297 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:15:49.798182  350297 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:15:49.798284  350297 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:15:49.799997  350297 out.go:204]   - Generating certificates and keys ...
	I0229 02:15:49.800080  350297 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:15:49.800162  350297 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:15:49.800275  350297 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:15:49.800363  350297 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:15:49.800451  350297 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:15:49.800524  350297 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:15:49.800590  350297 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:15:49.800641  350297 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:15:49.800716  350297 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:15:49.800807  350297 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:15:49.800843  350297 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:15:49.800889  350297 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:15:49.800936  350297 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:15:49.800980  350297 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:15:49.801032  350297 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:15:49.801077  350297 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:15:49.801132  350297 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:15:49.802424  350297 out.go:204]   - Booting up control plane ...
	I0229 02:15:49.802557  350297 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:15:49.802665  350297 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:15:49.802760  350297 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:15:49.802881  350297 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:15:49.803099  350297 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:15:49.803156  350297 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:15:49.803249  350297 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:15:49.803501  350297 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:15:49.803604  350297 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:15:49.803844  350297 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:15:49.803945  350297 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:15:49.804143  350297 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:15:49.804243  350297 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:15:49.804518  350297 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:15:49.804625  350297 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:15:49.804821  350297 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:15:49.804834  350297 kubeadm.go:322] 
	I0229 02:15:49.804894  350297 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:15:49.804951  350297 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:15:49.804960  350297 kubeadm.go:322] 
	I0229 02:15:49.805007  350297 kubeadm.go:322] This error is likely caused by:
	I0229 02:15:49.805077  350297 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:15:49.805227  350297 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:15:49.805240  350297 kubeadm.go:322] 
	I0229 02:15:49.805390  350297 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:15:49.805439  350297 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:15:49.805486  350297 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:15:49.805496  350297 kubeadm.go:322] 
	I0229 02:15:49.805631  350297 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:15:49.805762  350297 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:15:49.805890  350297 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:15:49.805933  350297 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:15:49.806038  350297 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:15:49.806148  350297 kubeadm.go:406] StartCluster complete in 3m56.499385149s
	I0229 02:15:49.806168  350297 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:15:49.806214  350297 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:15:49.806302  350297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:15:49.878656  350297 cri.go:89] found id: ""
	I0229 02:15:49.878690  350297 logs.go:276] 0 containers: []
	W0229 02:15:49.878703  350297 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:15:49.878711  350297 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:15:49.878786  350297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:15:49.958903  350297 cri.go:89] found id: ""
	I0229 02:15:49.958939  350297 logs.go:276] 0 containers: []
	W0229 02:15:49.958952  350297 logs.go:278] No container was found matching "etcd"
	I0229 02:15:49.958960  350297 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:15:49.959030  350297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:15:50.008350  350297 cri.go:89] found id: ""
	I0229 02:15:50.008384  350297 logs.go:276] 0 containers: []
	W0229 02:15:50.008398  350297 logs.go:278] No container was found matching "coredns"
	I0229 02:15:50.008407  350297 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:15:50.008477  350297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:15:50.058460  350297 cri.go:89] found id: ""
	I0229 02:15:50.058495  350297 logs.go:276] 0 containers: []
	W0229 02:15:50.058506  350297 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:15:50.058515  350297 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:15:50.058587  350297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:15:50.101581  350297 cri.go:89] found id: ""
	I0229 02:15:50.101624  350297 logs.go:276] 0 containers: []
	W0229 02:15:50.101636  350297 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:15:50.101645  350297 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:15:50.101721  350297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:15:50.141944  350297 cri.go:89] found id: ""
	I0229 02:15:50.141990  350297 logs.go:276] 0 containers: []
	W0229 02:15:50.142009  350297 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:15:50.142022  350297 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:15:50.142088  350297 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:15:50.189277  350297 cri.go:89] found id: ""
	I0229 02:15:50.189313  350297 logs.go:276] 0 containers: []
	W0229 02:15:50.189324  350297 logs.go:278] No container was found matching "kindnet"
	I0229 02:15:50.189339  350297 logs.go:123] Gathering logs for kubelet ...
	I0229 02:15:50.189357  350297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:15:50.251191  350297 logs.go:123] Gathering logs for dmesg ...
	I0229 02:15:50.251261  350297 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:15:50.268517  350297 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:15:50.268551  350297 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:15:50.412439  350297 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:15:50.412470  350297 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:15:50.412489  350297 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:15:50.528506  350297 logs.go:123] Gathering logs for container status ...
	I0229 02:15:50.528550  350297 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 02:15:50.588553  350297 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:15:50.588604  350297 out.go:239] * 
	W0229 02:15:50.588675  350297 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:15:50.588710  350297 out.go:239] * 
	W0229 02:15:50.589769  350297 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:15:50.592812  350297 out.go:177] 
	W0229 02:15:50.594204  350297 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:15:50.594283  350297 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:15:50.594320  350297 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:15:50.595768  350297 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-171039 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
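The failure signature above (kubeadm timing out on wait-control-plane while the kubelet never answers on localhost:10248) commonly traces back to the kubelet failing at startup, for example from a cgroup-driver mismatch with CRI-O, which is what the log's own suggestion (--extra-config=kubelet.cgroup-driver=systemd) targets. A minimal diagnostic sketch, run inside the VM (e.g. via minikube ssh); the config paths are the conventional kubelet/CRI-O locations and are assumptions, not taken from this log:

	# Is the kubelet running at all, and why did it last exit?
	systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50

	# Compare the cgroup driver on both sides; they must agree.
	grep -r cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	grep cgroupDriver /var/lib/kubelet/config.yaml

	# CRI-O's view of any control-plane containers that did start.
	sudo crictl ps -a | grep -E 'kube|etcd'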
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-171039
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-171039: (2.122243863s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171039 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-171039 status --format={{.Host}}: exit status 7 (104.135901ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
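The "(may be ok)" hedge reflects how minikube status reports state: per minikube's documentation, the exit code encodes host, cluster, and Kubernetes health as bits (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so exit status 7 = 1+2+4 is exactly what a just-stopped profile should return. A quick check against the profile from this run:

	out/minikube-linux-amd64 -p kubernetes-upgrade-171039 status; echo "exit: $?"
	# a fully stopped profile is expected to print Stopped and exit 7 here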
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171039 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171039 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.784440662s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-171039 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171039 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-171039 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (102.308632ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-171039] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-171039
	    minikube start -p kubernetes-upgrade-171039 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1710392 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-171039 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
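The refusal is by design: Kubernetes components (notably etcd and the API server) cannot be safely rolled back across minor versions in place, so minikube exits with K8S_DOWNGRADE_UNSUPPORTED rather than attempt it. The first recovery path from the suggestion above, written out as a runnable sequence (profile name taken from this run):

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-171039
	out/minikube-linux-amd64 start -p kubernetes-upgrade-171039 \
	  --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio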
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171039 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171039 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (24.775667869s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-29 02:16:57.60824078 +0000 UTC m=+3975.301387689
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-171039 -n kubernetes-upgrade-171039
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-171039 logs -n 25: (1.573345613s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-424173                | NoKubernetes-424173       | jenkins | v1.32.0 | 29 Feb 24 02:11 UTC | 29 Feb 24 02:12 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-424173 sudo           | NoKubernetes-424173       | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-745961 stop           | minikube                  | jenkins | v1.26.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:12 UTC |
	| start   | -p stopped-upgrade-745961             | stopped-upgrade-745961    | jenkins | v1.32.0 | 29 Feb 24 02:12 UTC | 29 Feb 24 02:13 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-424173                | NoKubernetes-424173       | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:13 UTC |
	| delete  | -p running-upgrade-546307             | running-upgrade-546307    | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:13 UTC |
	| start   | -p NoKubernetes-424173                | NoKubernetes-424173       | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:13 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-283864             | cert-expiration-283864    | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:14 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-745961             | stopped-upgrade-745961    | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:13 UTC |
	| start   | -p force-systemd-flag-153144          | force-systemd-flag-153144 | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:14 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-424173 sudo           | NoKubernetes-424173       | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-424173                | NoKubernetes-424173       | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:13 UTC |
	| start   | -p cert-options-501178                | cert-options-501178       | jenkins | v1.32.0 | 29 Feb 24 02:13 UTC | 29 Feb 24 02:15 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-153144 ssh cat     | force-systemd-flag-153144 | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC | 29 Feb 24 02:14 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-153144          | force-systemd-flag-153144 | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC | 29 Feb 24 02:14 UTC |
	| start   | -p pause-060637 --memory=2048         | pause-060637              | jenkins | v1.32.0 | 29 Feb 24 02:14 UTC | 29 Feb 24 02:16 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-501178 ssh               | cert-options-501178       | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC | 29 Feb 24 02:15 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-501178 -- sudo        | cert-options-501178       | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC | 29 Feb 24 02:15 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-501178                | cert-options-501178       | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC | 29 Feb 24 02:15 UTC |
	| start   | -p auto-117441 --memory=3072          | auto-117441               | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-171039          | kubernetes-upgrade-171039 | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC | 29 Feb 24 02:15 UTC |
	| start   | -p kubernetes-upgrade-171039          | kubernetes-upgrade-171039 | jenkins | v1.32.0 | 29 Feb 24 02:15 UTC | 29 Feb 24 02:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171039          | kubernetes-upgrade-171039 | jenkins | v1.32.0 | 29 Feb 24 02:16 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171039          | kubernetes-upgrade-171039 | jenkins | v1.32.0 | 29 Feb 24 02:16 UTC | 29 Feb 24 02:16 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-060637                       | pause-060637              | jenkins | v1.32.0 | 29 Feb 24 02:16 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:16:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:16:39.586534  354661 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:16:39.586988  354661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:16:39.587000  354661 out.go:304] Setting ErrFile to fd 2...
	I0229 02:16:39.587007  354661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:16:39.587428  354661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:16:39.588602  354661 out.go:298] Setting JSON to false
	I0229 02:16:39.589734  354661 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7143,"bootTime":1709165857,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:16:39.589809  354661 start.go:139] virtualization: kvm guest
	I0229 02:16:39.591743  354661 out.go:177] * [pause-060637] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:16:39.593294  354661 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:16:39.593344  354661 notify.go:220] Checking for updates...
	I0229 02:16:39.594464  354661 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:16:39.595783  354661 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:16:39.597076  354661 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:16:39.598357  354661 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:16:39.599508  354661 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:16:39.600979  354661 config.go:182] Loaded profile config "pause-060637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:16:39.601437  354661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:16:39.601519  354661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:16:39.617683  354661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I0229 02:16:39.618206  354661 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:16:39.618944  354661 main.go:141] libmachine: Using API Version  1
	I0229 02:16:39.618975  354661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:16:39.619314  354661 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:16:39.619537  354661 main.go:141] libmachine: (pause-060637) Calling .DriverName
	I0229 02:16:39.619837  354661 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:16:39.620142  354661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:16:39.620187  354661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:16:39.635582  354661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37137
	I0229 02:16:39.635929  354661 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:16:39.636423  354661 main.go:141] libmachine: Using API Version  1
	I0229 02:16:39.636455  354661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:16:39.636818  354661 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:16:39.637009  354661 main.go:141] libmachine: (pause-060637) Calling .DriverName
	I0229 02:16:39.669760  354661 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:16:39.670967  354661 start.go:299] selected driver: kvm2
	I0229 02:16:39.670984  354661 start.go:903] validating driver "kvm2" against &{Name:pause-060637 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:pause-060637 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:16:39.671148  354661 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:16:39.671505  354661 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:16:39.671591  354661 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:16:39.687309  354661 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:16:39.688113  354661 cni.go:84] Creating CNI manager for ""
	I0229 02:16:39.688132  354661 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:16:39.688142  354661 start_flags.go:323] config:
	{Name:pause-060637 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-060637 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.169 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:16:39.688357  354661 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:16:39.690007  354661 out.go:177] * Starting control plane node pause-060637 in cluster pause-060637
	I0229 02:16:39.691286  354661 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:16:39.691332  354661 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 02:16:39.691347  354661 cache.go:56] Caching tarball of preloaded images
	I0229 02:16:39.691448  354661 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:16:39.691464  354661 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 02:16:39.691606  354661 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/pause-060637/config.json ...
	I0229 02:16:39.691786  354661 start.go:365] acquiring machines lock for pause-060637: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:16:39.691826  354661 start.go:369] acquired machines lock for "pause-060637" in 21.508µs
	I0229 02:16:39.691840  354661 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:16:39.691849  354661 fix.go:54] fixHost starting: 
	I0229 02:16:39.692093  354661 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:16:39.692126  354661 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:16:39.706270  354661 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0229 02:16:39.706730  354661 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:16:39.707194  354661 main.go:141] libmachine: Using API Version  1
	I0229 02:16:39.707216  354661 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:16:39.707582  354661 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:16:39.707795  354661 main.go:141] libmachine: (pause-060637) Calling .DriverName
	I0229 02:16:39.707993  354661 main.go:141] libmachine: (pause-060637) Calling .GetState
	I0229 02:16:39.709504  354661 fix.go:102] recreateIfNeeded on pause-060637: state=Running err=<nil>
	W0229 02:16:39.709525  354661 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:16:39.711211  354661 out.go:177] * Updating the running kvm2 "pause-060637" VM ...
	I0229 02:16:37.315650  353912 pod_ready.go:102] pod "coredns-5dd5756b68-h424z" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:39.813110  353912 pod_ready.go:102] pod "coredns-5dd5756b68-h424z" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:41.813517  353912 pod_ready.go:102] pod "coredns-5dd5756b68-h424z" in "kube-system" namespace has status "Ready":"False"
	I0229 02:16:37.922344  354599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:16:37.983810  354599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kubernetes-upgrade-171039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 02:16:38.018393  354599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:16:38.049217  354599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:16:38.084523  354599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:16:38.118283  354599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:16:38.155454  354599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:16:38.186194  354599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:16:38.216202  354599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:16:38.246196  354599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:16:38.266825  354599 ssh_runner.go:195] Run: openssl version
	I0229 02:16:38.274358  354599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:16:38.288167  354599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:16:38.295081  354599 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:16:38.295158  354599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:16:38.302045  354599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:16:38.315515  354599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:16:38.328491  354599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:16:38.333657  354599 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:16:38.333719  354599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:16:38.340691  354599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:16:38.351745  354599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:16:38.366047  354599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:16:38.371649  354599 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:16:38.371707  354599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:16:38.378921  354599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:16:38.389522  354599 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:16:38.395037  354599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:16:38.401357  354599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:16:38.407606  354599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:16:38.413978  354599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:16:38.420690  354599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:16:38.427047  354599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 02:16:38.433548  354599 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-171039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-171039 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.214 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:16:38.433651  354599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:16:38.433698  354599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:16:38.481645  354599 cri.go:89] found id: "db6fc4565983bb728ec59494275c8c45344cc92bd49e7bda3c18f2c485b9e44e"
	I0229 02:16:38.481669  354599 cri.go:89] found id: "0e372774b06c91d7d212790eeda92aa9bf40ef4cfb13de40a7ac35a72b7952f8"
	I0229 02:16:38.481673  354599 cri.go:89] found id: "5cc1ea7262087d31fba747c766bfb891a0f8463a29e0a7af83614529835a6fcc"
	I0229 02:16:38.481676  354599 cri.go:89] found id: "eafa0ad559e6418ab9358d776b8ad48862c85a1e1015d8a38e25a85ca21263e2"
	I0229 02:16:38.481708  354599 cri.go:89] found id: ""
	I0229 02:16:38.481767  354599 ssh_runner.go:195] Run: sudo runc list -f json
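	The certificate checks and the container scan at the end of this log can be replayed by hand; a minimal sketch, assuming crictl on the node already points at the CRI-O socket (the default /var/run/crio/crio.sock):
	
	  # List every kube-system container (running or exited), IDs only,
	  # mirroring the command the harness runs above:
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	
	  # The preceding cert checks rely on openssl's -checkend: the exit status
	  # is non-zero if the certificate expires within the next 86400s (24h):
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400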
	
	
	==> CRI-O <==
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.526256684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8fe3083-3b12-4fa8-a0bd-b1c016f27420 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.528155325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0dfd6cbf-57f2-42ae-8152-a08cab0c3c82 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.528956595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709173018528928895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0dfd6cbf-57f2-42ae-8152-a08cab0c3c82 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.529636524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7904110-acd4-4152-9b03-027c2e890937 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.529733092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7904110-acd4-4152-9b03-027c2e890937 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.530009633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac1250682a0282daafad5111b7761f34e374342ac55043a561f9e53ecd0a083d,PodSandboxId:ff0402d9a0e93abd0b9eec3d47c6aa211f67cda77ce1d470f89c01b3b23be35a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173011093649247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9dc93a43490d3ca742c0e97d30300,},Annotations:map[string]string{io.kubernetes.container.hash: 302034b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17f31fbeaf63b106e9395465374ad77cb91c53ae38c4435b0d989cb92782238d,PodSandboxId:848fea119e74117b77f99ceecc0da9b81fbfbfc11e89d629273efcb0631572d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173011126302385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed8bf2e4b6c33c198d309cf83de36ca,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcb287d507f2d34a1b86dbc306558852868fa12907d50e94fded7224fbc6cae4,PodSandboxId:4af7df55e6a08780838e832646930292b8db0701880d95ea4b705d49d5765dbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173011085132287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5e84b16798ce00bbef6bee241c2ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fbb27f9c87060302ca8be5a323c61312b962097e8c2da17d3cb95aa3a5492c1,PodSandboxId:89d5d778caaf29eb1c775718caaec204a58ade40e32fb325428deba1e243f477,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173011072161363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1adc7ab0e465274b650b81161d8cf486,},Annotations:map[string]string{io.kubernetes.container.hash: b277b5f8,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7cb7d72fcdb1195389bcb1a916e1b6321c66c35a3f397f3321464a924d4ffd,PodSandboxId:4af7df55e6a08780838e832646930292b8db0701880d95ea4b705d49d5765dbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709173007001022230,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5e84b16798ce00bbef6bee241c2ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6fc4565983bb728ec59494275c8c45344cc92bd49e7bda3c18f2c485b9e44e,PodSandboxId:9618b37c25f855db9e2b3da52e38a0f3aea3694dc3a1080d1f4f6148e097e5e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709172994930104214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed8bf2e4b6c33c198d309cf83de36ca,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc1ea7262087d31fba747c766bfb891a0f8463a29e0a7af83614529835a6fcc,PodSandboxId:fea50d732fd63a8b899638eae1ab2714aebfdb7e25f379b4016d2a494ea7686b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709172994823033219,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1adc7ab0e465274b650b81161d8cf486,},Annotations:map[string]string{io.kubernetes.container.hash: b277b5f8,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafa0ad559e6418ab9358d776b8ad48862c85a1e1015d8a38e25a85ca21263e2,PodSandboxId:85236698073cc9c371a5ee352a0dad8a086fb9b1c21a68d5e71cc99995777fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709172994792765431,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9dc93a43490d3ca742c0e97d30300,},Annotations:map[string]string{io.kubernetes.container.hash: 302034b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7904110-acd4-4152-9b03-027c2e890937 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.595938118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f04f55ee-ff57-410d-ab1b-cf6379a9897f name=/runtime.v1.RuntimeService/Version
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.596036071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f04f55ee-ff57-410d-ab1b-cf6379a9897f name=/runtime.v1.RuntimeService/Version
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.597894474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=220ea859-7b08-46d0-97fc-b2f42dbd498e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.599145914Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709173018598772198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=220ea859-7b08-46d0-97fc-b2f42dbd498e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.600573117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fc8f5bb-5b48-40f8-8b17-b7ae3ebf3dca name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.600642461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fc8f5bb-5b48-40f8-8b17-b7ae3ebf3dca name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.601056351Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac1250682a0282daafad5111b7761f34e374342ac55043a561f9e53ecd0a083d,PodSandboxId:ff0402d9a0e93abd0b9eec3d47c6aa211f67cda77ce1d470f89c01b3b23be35a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173011093649247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9dc93a43490d3ca742c0e97d30300,},Annotations:map[string]string{io.kubernetes.container.hash: 302034b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17f31fbeaf63b106e9395465374ad77cb91c53ae38c4435b0d989cb92782238d,PodSandboxId:848fea119e74117b77f99ceecc0da9b81fbfbfc11e89d629273efcb0631572d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173011126302385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed8bf2e4b6c33c198d309cf83de36ca,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcb287d507f2d34a1b86dbc306558852868fa12907d50e94fded7224fbc6cae4,PodSandboxId:4af7df55e6a08780838e832646930292b8db0701880d95ea4b705d49d5765dbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173011085132287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5e84b16798ce00bbef6bee241c2ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fbb27f9c87060302ca8be5a323c61312b962097e8c2da17d3cb95aa3a5492c1,PodSandboxId:89d5d778caaf29eb1c775718caaec204a58ade40e32fb325428deba1e243f477,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173011072161363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1adc7ab0e465274b650b81161d8cf486,},Annotations:map[string]string{io.kubernetes.container.hash: b277b5f8,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7cb7d72fcdb1195389bcb1a916e1b6321c66c35a3f397f3321464a924d4ffd,PodSandboxId:4af7df55e6a08780838e832646930292b8db0701880d95ea4b705d49d5765dbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709173007001022230,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5e84b16798ce00bbef6bee241c2ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6fc4565983bb728ec59494275c8c45344cc92bd49e7bda3c18f2c485b9e44e,PodSandboxId:9618b37c25f855db9e2b3da52e38a0f3aea3694dc3a1080d1f4f6148e097e5e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709172994930104214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed8bf2e4b6c33c198d309cf83de36ca,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc1ea7262087d31fba747c766bfb891a0f8463a29e0a7af83614529835a6fcc,PodSandboxId:fea50d732fd63a8b899638eae1ab2714aebfdb7e25f379b4016d2a494ea7686b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709172994823033219,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1adc7ab0e465274b650b81161d8cf486,},Annotations:map[string]string{io.kubernetes.container.hash: b277b5f8,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafa0ad559e6418ab9358d776b8ad48862c85a1e1015d8a38e25a85ca21263e2,PodSandboxId:85236698073cc9c371a5ee352a0dad8a086fb9b1c21a68d5e71cc99995777fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709172994792765431,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9dc93a43490d3ca742c0e97d30300,},Annotations:map[string]string{io.kubernetes.container.hash: 302034b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fc8f5bb-5b48-40f8-8b17-b7ae3ebf3dca name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.648311653Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1131e1d5-d1a5-4b21-9a27-7d89bb79544f name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.648486461Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ff0402d9a0e93abd0b9eec3d47c6aa211f67cda77ce1d470f89c01b3b23be35a,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-171039,Uid:d1f9dc93a43490d3ca742c0e97d30300,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1709172997675610012,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9dc93a43490d3ca742c0e97d30300,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.214:2379,kubernetes.io/config.hash: d1f9dc93a43490d3ca742c0e97d30300,kubernetes.io/config.seen: 2024-02-29T02:16:21.966660912Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:89d5d778caaf29eb1c775718caaec204a58ade40e3
2fb325428deba1e243f477,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-171039,Uid:1adc7ab0e465274b650b81161d8cf486,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1709172997657693604,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1adc7ab0e465274b650b81161d8cf486,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.214:8443,kubernetes.io/config.hash: 1adc7ab0e465274b650b81161d8cf486,kubernetes.io/config.seen: 2024-02-29T02:16:21.966652735Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:848fea119e74117b77f99ceecc0da9b81fbfbfc11e89d629273efcb0631572d1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-171039,Uid:2ed8bf2e4b6c33c198d309cf83de36ca,Namespace:kube-system,Attempt:2,},State:SANDBOX_
READY,CreatedAt:1709172997646787157,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed8bf2e4b6c33c198d309cf83de36ca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2ed8bf2e4b6c33c198d309cf83de36ca,kubernetes.io/config.seen: 2024-02-29T02:16:21.966658140Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4af7df55e6a08780838e832646930292b8db0701880d95ea4b705d49d5765dbc,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-171039,Uid:6b5e84b16798ce00bbef6bee241c2ee9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1709172997606066748,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5e84b16
798ce00bbef6bee241c2ee9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6b5e84b16798ce00bbef6bee241c2ee9,kubernetes.io/config.seen: 2024-02-29T02:16:21.966659675Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1131e1d5-d1a5-4b21-9a27-7d89bb79544f name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.649133855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82defa35-7101-4a02-9ffe-e305154dd87e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.649392639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82defa35-7101-4a02-9ffe-e305154dd87e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.649535566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac1250682a0282daafad5111b7761f34e374342ac55043a561f9e53ecd0a083d,PodSandboxId:ff0402d9a0e93abd0b9eec3d47c6aa211f67cda77ce1d470f89c01b3b23be35a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173011093649247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9dc93a43490d3ca742c0e97d30300,},Annotations:map[string]string{io.kubernetes.container.hash: 302034b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17f31fbeaf63b106e9395465374ad77cb91c53ae38c4435b0d989cb92782238d,PodSandboxId:848fea119e74117b77f99ceecc0da9b81fbfbfc11e89d629273efcb0631572d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173011126302385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed8bf2e4b6c33c198d309cf83de36ca,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcb287d507f2d34a1b86dbc306558852868fa12907d50e94fded7224fbc6cae4,PodSandboxId:4af7df55e6a08780838e832646930292b8db0701880d95ea4b705d49d5765dbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173011085132287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5e84b16798ce00bbef6bee241c2ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fbb27f9c87060302ca8be5a323c61312b962097e8c2da17d3cb95aa3a5492c1,PodSandboxId:89d5d778caaf29eb1c775718caaec204a58ade40e32fb325428deba1e243f477,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173011072161363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1adc7ab0e465274b650b81161d8cf486,},Annotations:map[string]string{io.kubernetes.container.hash: b277b5f8,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82defa35-7101-4a02-9ffe-e305154dd87e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.653461588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75fed8cc-e06d-4550-9472-97bb995ae602 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.653515960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75fed8cc-e06d-4550-9472-97bb995ae602 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.654710985Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8747c08e-9974-42b7-836f-a056535018ef name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.655318351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709173018655297120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8747c08e-9974-42b7-836f-a056535018ef name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.656002274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b517c65-d329-4515-9188-ca76e1206f71 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.656138792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b517c65-d329-4515-9188-ca76e1206f71 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:16:58 kubernetes-upgrade-171039 crio[1818]: time="2024-02-29 02:16:58.656328849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac1250682a0282daafad5111b7761f34e374342ac55043a561f9e53ecd0a083d,PodSandboxId:ff0402d9a0e93abd0b9eec3d47c6aa211f67cda77ce1d470f89c01b3b23be35a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173011093649247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9dc93a43490d3ca742c0e97d30300,},Annotations:map[string]string{io.kubernetes.container.hash: 302034b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17f31fbeaf63b106e9395465374ad77cb91c53ae38c4435b0d989cb92782238d,PodSandboxId:848fea119e74117b77f99ceecc0da9b81fbfbfc11e89d629273efcb0631572d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173011126302385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed8bf2e4b6c33c198d309cf83de36ca,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcb287d507f2d34a1b86dbc306558852868fa12907d50e94fded7224fbc6cae4,PodSandboxId:4af7df55e6a08780838e832646930292b8db0701880d95ea4b705d49d5765dbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173011085132287,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5e84b16798ce00bbef6bee241c2ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fbb27f9c87060302ca8be5a323c61312b962097e8c2da17d3cb95aa3a5492c1,PodSandboxId:89d5d778caaf29eb1c775718caaec204a58ade40e32fb325428deba1e243f477,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173011072161363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1adc7ab0e465274b650b81161d8cf486,},Annotations:map[string]string{io.kubernetes.container.hash: b277b5f8,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7cb7d72fcdb1195389bcb1a916e1b6321c66c35a3f397f3321464a924d4ffd,PodSandboxId:4af7df55e6a08780838e832646930292b8db0701880d95ea4b705d49d5765dbc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1709173007001022230,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5e84b16798ce00bbef6bee241c2ee9,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db6fc4565983bb728ec59494275c8c45344cc92bd49e7bda3c18f2c485b9e44e,PodSandboxId:9618b37c25f855db9e2b3da52e38a0f3aea3694dc3a1080d1f4f6148e097e5e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1709172994930104214,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ed8bf2e4b6c33c198d309cf83de36ca,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc1ea7262087d31fba747c766bfb891a0f8463a29e0a7af83614529835a6fcc,PodSandboxId:fea50d732fd63a8b899638eae1ab2714aebfdb7e25f379b4016d2a494ea7686b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1709172994823033219,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1adc7ab0e465274b650b81161d8cf486,},Annotations:map[string]string{io.kubernetes.container.hash: b277b5f8,io.kubernetes.container.restartCount: 1,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eafa0ad559e6418ab9358d776b8ad48862c85a1e1015d8a38e25a85ca21263e2,PodSandboxId:85236698073cc9c371a5ee352a0dad8a086fb9b1c21a68d5e71cc99995777fa7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1709172994792765431,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9dc93a43490d3ca742c0e97d30300,},Annotations:map[string]string{io.kubernetes.container.hash: 302034b9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b517c65-d329-4515-9188-ca76e1206f71 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17f31fbeaf63b       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   7 seconds ago       Running             kube-controller-manager   2                   848fea119e741       kube-controller-manager-kubernetes-upgrade-171039
	ac1250682a028       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   7 seconds ago       Running             etcd                      2                   ff0402d9a0e93       etcd-kubernetes-upgrade-171039
	fcb287d507f2d       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   7 seconds ago       Running             kube-scheduler            3                   4af7df55e6a08       kube-scheduler-kubernetes-upgrade-171039
	1fbb27f9c8706       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   7 seconds ago       Running             kube-apiserver            2                   89d5d778caaf2       kube-apiserver-kubernetes-upgrade-171039
	1e7cb7d72fcdb       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   11 seconds ago      Exited              kube-scheduler            2                   4af7df55e6a08       kube-scheduler-kubernetes-upgrade-171039
	db6fc4565983b       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   23 seconds ago      Exited              kube-controller-manager   1                   9618b37c25f85       kube-controller-manager-kubernetes-upgrade-171039
	5cc1ea7262087       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   23 seconds ago      Exited              kube-apiserver            1                   fea50d732fd63       kube-apiserver-kubernetes-upgrade-171039
	eafa0ad559e64       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   23 seconds ago      Exited              etcd                      1                   85236698073cc       etcd-kubernetes-upgrade-171039
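For reference, the container-status table above is the CRI view of the upgrade: all four control-plane containers were restarted 7-11 seconds before capture (attempt counters 2 and 3), with their previous attempts left in Exited state. Assuming the profile were still up, a similar listing could be pulled by hand with crictl (a diagnostic sketch, not part of the test run):

	# list all CRI-O containers on the node, including exited ones
	minikube ssh -p kubernetes-upgrade-171039 -- sudo crictl ps -a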
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-171039
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-171039
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:16:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-171039
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:16:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:16:54 +0000   Thu, 29 Feb 2024 02:16:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:16:54 +0000   Thu, 29 Feb 2024 02:16:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:16:54 +0000   Thu, 29 Feb 2024 02:16:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:16:54 +0000   Thu, 29 Feb 2024 02:16:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.214
	  Hostname:    kubernetes-upgrade-171039
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 eec2b9671d4e4a2ca5b0efa3dc4489c9
	  System UUID:                eec2b967-1d4e-4a2c-a5b0-efa3dc4489c9
	  Boot ID:                    484ef35a-76f3-49d5-b5d6-7ff6d1f0189e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-171039                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26s
	  kube-system                 kube-apiserver-kubernetes-upgrade-171039             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-171039    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-kubernetes-upgrade-171039             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 37s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet  Node kubernetes-upgrade-171039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet  Node kubernetes-upgrade-171039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x7 over 36s)  kubelet  Node kubernetes-upgrade-171039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 8s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-171039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-171039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet  Node kubernetes-upgrade-171039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
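Two details stand out above: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint even though its Ready condition is True, and the Events record the kubelet starting twice (37s and 8s ago), matching the container restarts. On a live cluster the taint could be inspected directly (illustrative only):

	# print any taints left on the upgraded node
	kubectl --context kubernetes-upgrade-171039 get node kubernetes-upgrade-171039 -o jsonpath='{.spec.taints}'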
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063559] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047546] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Feb29 02:16] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.473492] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.745638] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.950089] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.088885] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074571] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.193306] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.157902] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.293014] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +7.495748] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.069996] kauditd_printk_skb: 130 callbacks suppressed
	[ +12.998666] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.043914] systemd-fstab-generator[1735]: Ignoring "noauto" option for root device
	[  +0.187154] systemd-fstab-generator[1748]: Ignoring "noauto" option for root device
	[  +0.238064] systemd-fstab-generator[1763]: Ignoring "noauto" option for root device
	[  +0.208513] systemd-fstab-generator[1775]: Ignoring "noauto" option for root device
	[  +0.344040] systemd-fstab-generator[1802]: Ignoring "noauto" option for root device
	[ +10.398265] kauditd_printk_skb: 197 callbacks suppressed
	[  +3.253765] systemd-fstab-generator[2327]: Ignoring "noauto" option for root device
	
	
	==> etcd [ac1250682a0282daafad5111b7761f34e374342ac55043a561f9e53ecd0a083d] <==
	{"level":"info","ts":"2024-02-29T02:16:51.560915Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T02:16:51.560488Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T02:16:51.561665Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3dc9612c0afb3334","initial-advertise-peer-urls":["https://192.168.50.214:2380"],"listen-peer-urls":["https://192.168.50.214:2380"],"advertise-client-urls":["https://192.168.50.214:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.214:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:16:51.536946Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-02-29T02:16:51.560552Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.214:2380"}
	{"level":"info","ts":"2024-02-29T02:16:51.555373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 switched to configuration voters=(4452196548423136052)"}
	{"level":"info","ts":"2024-02-29T02:16:51.56406Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.214:2380"}
	{"level":"info","ts":"2024-02-29T02:16:51.564308Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6c00e6cf347ec681","local-member-id":"3dc9612c0afb3334","added-peer-id":"3dc9612c0afb3334","added-peer-peer-urls":["https://192.168.50.214:2380"]}
	{"level":"info","ts":"2024-02-29T02:16:51.565452Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c00e6cf347ec681","local-member-id":"3dc9612c0afb3334","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:16:51.56394Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:16:51.56812Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:16:53.401189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 is starting a new election at term 3"}
	{"level":"info","ts":"2024-02-29T02:16:53.401295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:16:53.40135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 received MsgPreVoteResp from 3dc9612c0afb3334 at term 3"}
	{"level":"info","ts":"2024-02-29T02:16:53.401386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became candidate at term 4"}
	{"level":"info","ts":"2024-02-29T02:16:53.401411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 received MsgVoteResp from 3dc9612c0afb3334 at term 4"}
	{"level":"info","ts":"2024-02-29T02:16:53.401438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became leader at term 4"}
	{"level":"info","ts":"2024-02-29T02:16:53.401463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dc9612c0afb3334 elected leader 3dc9612c0afb3334 at term 4"}
	{"level":"info","ts":"2024-02-29T02:16:53.407138Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3dc9612c0afb3334","local-member-attributes":"{Name:kubernetes-upgrade-171039 ClientURLs:[https://192.168.50.214:2379]}","request-path":"/0/members/3dc9612c0afb3334/attributes","cluster-id":"6c00e6cf347ec681","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:16:53.407231Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:16:53.407491Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:16:53.407562Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:16:53.407582Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:16:53.409578Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.214:2379"}
	{"level":"info","ts":"2024-02-29T02:16:53.409733Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [eafa0ad559e6418ab9358d776b8ad48862c85a1e1015d8a38e25a85ca21263e2] <==
	{"level":"info","ts":"2024-02-29T02:16:35.18309Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3dc9612c0afb3334","initial-advertise-peer-urls":["https://192.168.50.214:2380"],"listen-peer-urls":["https://192.168.50.214:2380"],"advertise-client-urls":["https://192.168.50.214:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.214:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:16:36.436951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T02:16:36.437009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:16:36.43704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 received MsgPreVoteResp from 3dc9612c0afb3334 at term 2"}
	{"level":"info","ts":"2024-02-29T02:16:36.437053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:16:36.437058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 received MsgVoteResp from 3dc9612c0afb3334 at term 3"}
	{"level":"info","ts":"2024-02-29T02:16:36.437067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3dc9612c0afb3334 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T02:16:36.437075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3dc9612c0afb3334 elected leader 3dc9612c0afb3334 at term 3"}
	{"level":"info","ts":"2024-02-29T02:16:36.439138Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3dc9612c0afb3334","local-member-attributes":"{Name:kubernetes-upgrade-171039 ClientURLs:[https://192.168.50.214:2379]}","request-path":"/0/members/3dc9612c0afb3334/attributes","cluster-id":"6c00e6cf347ec681","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:16:36.439197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:16:36.441767Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:16:36.443423Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:16:36.443737Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:16:36.443939Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:16:36.452116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.214:2379"}
	{"level":"info","ts":"2024-02-29T02:16:36.805225Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T02:16:36.806902Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-171039","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.214:2380"],"advertise-client-urls":["https://192.168.50.214:2379"]}
	{"level":"warn","ts":"2024-02-29T02:16:36.808Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T02:16:36.808302Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T02:16:36.825919Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.214:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T02:16:36.825991Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.214:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T02:16:36.826054Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3dc9612c0afb3334","current-leader-member-id":"3dc9612c0afb3334"}
	{"level":"info","ts":"2024-02-29T02:16:36.844576Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.214:2380"}
	{"level":"info","ts":"2024-02-29T02:16:36.848107Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.214:2380"}
	{"level":"info","ts":"2024-02-29T02:16:36.848149Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-171039","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.214:2380"],"advertise-client-urls":["https://192.168.50.214:2379"]}
	
	
	==> kernel <==
	 02:16:59 up 1 min,  0 users,  load average: 1.64, 0.47, 0.16
	Linux kubernetes-upgrade-171039 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1fbb27f9c87060302ca8be5a323c61312b962097e8c2da17d3cb95aa3a5492c1] <==
	I0229 02:16:54.759626       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 02:16:54.759966       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 02:16:54.690499       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0229 02:16:54.760952       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0229 02:16:54.761007       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0229 02:16:54.761032       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 02:16:54.762937       1 aggregator.go:165] initial CRD sync complete...
	I0229 02:16:54.762986       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 02:16:54.763010       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 02:16:54.796087       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 02:16:54.850627       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 02:16:54.863118       1 cache.go:39] Caches are synced for autoregister controller
	I0229 02:16:54.885413       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 02:16:54.886348       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 02:16:54.886501       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0229 02:16:54.886532       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0229 02:16:54.889961       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 02:16:54.890522       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0229 02:16:54.904452       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0229 02:16:55.694630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 02:16:56.291185       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 02:16:56.303247       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 02:16:56.333588       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 02:16:56.363639       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 02:16:56.370678       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
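The replacement apiserver comes up healthy: caches sync, the API Priority and Fairness workers start, and quota evaluators for serviceaccounts, deployments, daemonsets, roles, and rolebindings are registered as kubeadm re-applies its manifests; the single error about removing old endpoints is typical of a restart. Readiness could also be confirmed through the API itself (illustrative):

	# query the apiserver's aggregated readiness checks
	kubectl --context kubernetes-upgrade-171039 get --raw='/readyz?verbose'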
	
	
	==> kube-apiserver [5cc1ea7262087d31fba747c766bfb891a0f8463a29e0a7af83614529835a6fcc] <==
	I0229 02:16:35.332374       1 options.go:222] external host was not specified, using 192.168.50.214
	I0229 02:16:35.333385       1 server.go:148] Version: v1.29.0-rc.2
	I0229 02:16:35.333458       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [17f31fbeaf63b106e9395465374ad77cb91c53ae38c4435b0d989cb92782238d] <==
	I0229 02:16:58.165220       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0229 02:16:58.165238       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0229 02:16:58.165259       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0229 02:16:58.165380       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0229 02:16:58.422142       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0229 02:16:58.422224       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0229 02:16:58.422235       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0229 02:16:58.566108       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0229 02:16:58.566208       1 stateful_set.go:161] "Starting stateful set controller"
	I0229 02:16:58.566216       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0229 02:16:58.717124       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0229 02:16:58.717428       1 ttl_controller.go:124] "Starting TTL controller"
	I0229 02:16:58.717524       1 shared_informer.go:311] Waiting for caches to sync for TTL
	E0229 02:16:58.763687       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0229 02:16:58.763708       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0229 02:16:58.927255       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0229 02:16:58.927444       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0229 02:16:58.927489       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0229 02:16:59.066539       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0229 02:16:59.066600       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0229 02:16:59.066709       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0229 02:16:59.066719       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0229 02:16:59.217595       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0229 02:16:59.217795       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0229 02:16:59.217905       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	
	
	==> kube-controller-manager [db6fc4565983bb728ec59494275c8c45344cc92bd49e7bda3c18f2c485b9e44e] <==
	I0229 02:16:36.156730       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [1e7cb7d72fcdb1195389bcb1a916e1b6321c66c35a3f397f3321464a924d4ffd] <==
	E0229 02:16:48.344736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.50.214:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	W0229 02:16:48.345183       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.214:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	E0229 02:16:48.345247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.214:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	W0229 02:16:48.345284       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://192.168.50.214:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	E0229 02:16:48.345342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.50.214:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	W0229 02:16:48.345382       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.50.214:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	E0229 02:16:48.345448       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.50.214:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	W0229 02:16:48.345748       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.50.214:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	E0229 02:16:48.345873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.50.214:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	W0229 02:16:48.345930       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.214:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	E0229 02:16:48.346010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.214:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	W0229 02:16:48.346030       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: Get "https://192.168.50.214:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	E0229 02:16:48.346056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.50.214:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	W0229 02:16:48.346235       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.50.214:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	W0229 02:16:48.346249       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.214:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	W0229 02:16:48.346920       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.50.214:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	E0229 02:16:48.346971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.50.214:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	E0229 02:16:48.346936       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.214:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	E0229 02:16:48.346899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.50.214:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.214:8443: connect: connection refused
	E0229 02:16:48.657995       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0229 02:16:48.663492       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 02:16:48.664163       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0229 02:16:48.664673       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:16:48.664689       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0229 02:16:48.664935       1 run.go:74] "command failed" err="finished without leader elect"
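The exited scheduler tells the same story from the client side: every informer list against https://192.168.50.214:8443 fails with connection refused while the old apiserver is down, and the process shuts down reporting "finished without leader elect". Reachability of that endpoint can be probed directly (a sketch; IP and port are taken from the log):

	# probe the apiserver endpoint; -k skips certificate verification for a quick check
	curl -sk https://192.168.50.214:8443/healthz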
	
	
	==> kube-scheduler [fcb287d507f2d34a1b86dbc306558852868fa12907d50e94fded7224fbc6cae4] <==
	I0229 02:16:52.167690       1 serving.go:380] Generated self-signed cert in-memory
	W0229 02:16:54.773040       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 02:16:54.773158       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0229 02:16:54.773265       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 02:16:54.773273       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 02:16:54.817290       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0229 02:16:54.817343       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:16:54.821085       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:16:54.821133       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:16:54.818797       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:16:54.821620       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:16:54.922029       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.856950    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2ed8bf2e4b6c33c198d309cf83de36ca-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-171039\" (UID: \"2ed8bf2e4b6c33c198d309cf83de36ca\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.857002    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ed8bf2e4b6c33c198d309cf83de36ca-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-171039\" (UID: \"2ed8bf2e4b6c33c198d309cf83de36ca\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.857026    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b5e84b16798ce00bbef6bee241c2ee9-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-171039\" (UID: \"6b5e84b16798ce00bbef6bee241c2ee9\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.857045    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d1f9dc93a43490d3ca742c0e97d30300-etcd-data\") pod \"etcd-kubernetes-upgrade-171039\" (UID: \"d1f9dc93a43490d3ca742c0e97d30300\") " pod="kube-system/etcd-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.857063    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ed8bf2e4b6c33c198d309cf83de36ca-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-171039\" (UID: \"2ed8bf2e4b6c33c198d309cf83de36ca\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.857084    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ed8bf2e4b6c33c198d309cf83de36ca-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-171039\" (UID: \"2ed8bf2e4b6c33c198d309cf83de36ca\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.857113    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ed8bf2e4b6c33c198d309cf83de36ca-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-171039\" (UID: \"2ed8bf2e4b6c33c198d309cf83de36ca\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.857136    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d1f9dc93a43490d3ca742c0e97d30300-etcd-certs\") pod \"etcd-kubernetes-upgrade-171039\" (UID: \"d1f9dc93a43490d3ca742c0e97d30300\") " pod="kube-system/etcd-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.857154    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1adc7ab0e465274b650b81161d8cf486-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-171039\" (UID: \"1adc7ab0e465274b650b81161d8cf486\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.857171    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1adc7ab0e465274b650b81161d8cf486-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-171039\" (UID: \"1adc7ab0e465274b650b81161d8cf486\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.857191    2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1adc7ab0e465274b650b81161d8cf486-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-171039\" (UID: \"1adc7ab0e465274b650b81161d8cf486\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:50.864067    2334 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-171039"
	Feb 29 02:16:50 kubernetes-upgrade-171039 kubelet[2334]: E0229 02:16:50.865133    2334 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.214:8443: connect: connection refused" node="kubernetes-upgrade-171039"
	Feb 29 02:16:51 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:51.050179    2334 scope.go:117] "RemoveContainer" containerID="eafa0ad559e6418ab9358d776b8ad48862c85a1e1015d8a38e25a85ca21263e2"
	Feb 29 02:16:51 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:51.051584    2334 scope.go:117] "RemoveContainer" containerID="5cc1ea7262087d31fba747c766bfb891a0f8463a29e0a7af83614529835a6fcc"
	Feb 29 02:16:51 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:51.051948    2334 scope.go:117] "RemoveContainer" containerID="db6fc4565983bb728ec59494275c8c45344cc92bd49e7bda3c18f2c485b9e44e"
	Feb 29 02:16:51 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:51.052717    2334 scope.go:117] "RemoveContainer" containerID="1e7cb7d72fcdb1195389bcb1a916e1b6321c66c35a3f397f3321464a924d4ffd"
	Feb 29 02:16:51 kubernetes-upgrade-171039 kubelet[2334]: E0229 02:16:51.163482    2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-171039?timeout=10s\": dial tcp 192.168.50.214:8443: connect: connection refused" interval="800ms"
	Feb 29 02:16:51 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:51.266046    2334 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-171039"
	Feb 29 02:16:51 kubernetes-upgrade-171039 kubelet[2334]: E0229 02:16:51.266936    2334 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.214:8443: connect: connection refused" node="kubernetes-upgrade-171039"
	Feb 29 02:16:52 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:52.071045    2334 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-171039"
	Feb 29 02:16:54 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:54.857001    2334 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-171039"
	Feb 29 02:16:54 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:54.857266    2334 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-171039"
	Feb 29 02:16:55 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:55.531476    2334 apiserver.go:52] "Watching apiserver"
	Feb 29 02:16:55 kubernetes-upgrade-171039 kubelet[2334]: I0229 02:16:55.555790    2334 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
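The kubelet excerpt shows the recovery sequence: volume reconciliation for the static pods, two node-registration attempts rejected with connection refused while 8443 is still down, RemoveContainer calls retiring the attempt-1 containers, and a successful re-registration at 02:16:54 once the new apiserver's caches have synced. The full unit log would be available on the VM via journald (a sketch):

	# show the most recent kubelet unit log inside the minikube VM
	minikube ssh -p kubernetes-upgrade-171039 -- sudo journalctl -u kubelet --no-pager -n 100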
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:16:58.014094  354812 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18063-316644/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-171039 -n kubernetes-upgrade-171039
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-171039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: etcd-kubernetes-upgrade-171039 kube-apiserver-kubernetes-upgrade-171039 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-171039 describe pod etcd-kubernetes-upgrade-171039 kube-apiserver-kubernetes-upgrade-171039 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-171039 describe pod etcd-kubernetes-upgrade-171039 kube-apiserver-kubernetes-upgrade-171039 storage-provisioner: exit status 1 (73.780487ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "etcd-kubernetes-upgrade-171039" not found
	Error from server (NotFound): pods "kube-apiserver-kubernetes-upgrade-171039" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-171039 describe pod etcd-kubernetes-upgrade-171039 kube-apiserver-kubernetes-upgrade-171039 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-171039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-171039
--- FAIL: TestKubernetesUpgrade (373.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (293.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-275488 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-275488 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: exit status 109 (4m52.970349562s)

                                                
                                                
-- stdout --
	* [old-k8s-version-275488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node old-k8s-version-275488 in cluster old-k8s-version-275488
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 02:20:07.906055  363058 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:20:07.906343  363058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:20:07.906357  363058 out.go:304] Setting ErrFile to fd 2...
	I0229 02:20:07.906365  363058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:20:07.906658  363058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:20:07.907393  363058 out.go:298] Setting JSON to false
	I0229 02:20:07.908721  363058 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7351,"bootTime":1709165857,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:20:07.908823  363058 start.go:139] virtualization: kvm guest
	I0229 02:20:07.910817  363058 out.go:177] * [old-k8s-version-275488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:20:07.912605  363058 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:20:07.912674  363058 notify.go:220] Checking for updates...
	I0229 02:20:07.914202  363058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:20:07.915841  363058 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:20:07.917230  363058 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:20:07.918644  363058 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:20:07.919911  363058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:20:07.921525  363058 config.go:182] Loaded profile config "bridge-117441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:20:07.921700  363058 config.go:182] Loaded profile config "enable-default-cni-117441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:20:07.921839  363058 config.go:182] Loaded profile config "flannel-117441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:20:07.921972  363058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:20:07.966378  363058 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 02:20:07.967780  363058 start.go:299] selected driver: kvm2
	I0229 02:20:07.967801  363058 start.go:903] validating driver "kvm2" against <nil>
	I0229 02:20:07.967817  363058 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:20:07.968957  363058 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:20:07.969055  363058 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:20:07.986467  363058 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:20:07.986523  363058 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 02:20:07.986803  363058 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:20:07.986938  363058 cni.go:84] Creating CNI manager for ""
	I0229 02:20:07.986952  363058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:20:07.986965  363058 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 02:20:07.986972  363058 start_flags.go:323] config:
	{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:20:07.987158  363058 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:20:07.988992  363058 out.go:177] * Starting control plane node old-k8s-version-275488 in cluster old-k8s-version-275488
	I0229 02:20:07.990348  363058 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:20:07.990394  363058 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 02:20:07.990408  363058 cache.go:56] Caching tarball of preloaded images
	I0229 02:20:07.990496  363058 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:20:07.990512  363058 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
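
The two preload lines above take the cache-hit path: the images tarball already exists under .minikube/cache, so the download is skipped. A minimal Go sketch of that existence check (the helper name and downloadFn parameter are hypothetical, not minikube's actual preload.go):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // ensurePreload returns the local tarball path, downloading only on a cache miss.
    // downloadFn is a stand-in for whatever fetches the tarball (an assumption here).
    func ensurePreload(cacheDir, k8sVersion string, downloadFn func(dst string) error) (string, error) {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
        dst := filepath.Join(cacheDir, "preloaded-tarball", name)
        if _, err := os.Stat(dst); err == nil {
            return dst, nil // found in cache, skipping download
        } else if !os.IsNotExist(err) {
            return "", err
        }
        if err := downloadFn(dst); err != nil {
            return "", err
        }
        return dst, nil
    }
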
	I0229 02:20:07.990618  363058 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:20:07.990640  363058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json: {Name:mkc0cc21b63f0140c2cc6dc19b218f02752c0283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:20:07.990764  363058 start.go:365] acquiring machines lock for old-k8s-version-275488: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:20:22.017212  363058 start.go:369] acquired machines lock for "old-k8s-version-275488" in 14.026416242s
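
The 14s gap between requesting and acquiring the machines lock reflects another profile holding it at the time. The logged spec ({Name ... Delay:500ms Timeout:13m0s}) suggests a named lock polled on a fixed interval with an overall deadline; below is a stdlib-only sketch of that pattern using flock, which is an assumption and not minikube's actual lock package:

    package main

    import (
        "errors"
        "os"
        "syscall"
        "time"
    )

    // acquire polls an flock-protected file every delay until timeout elapses.
    func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return nil, err
        }
        deadline := time.Now().Add(timeout)
        for {
            // non-blocking exclusive lock attempt
            if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
                return f, nil // caller releases via LOCK_UN and f.Close()
            }
            if time.Now().After(deadline) {
                f.Close()
                return nil, errors.New("timed out acquiring lock " + path)
            }
            time.Sleep(delay) // matches the 500ms Delay shown in the log
        }
    }
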
	I0229 02:20:22.017301  363058 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:20:22.017454  363058 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 02:20:22.019241  363058 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:20:22.019484  363058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:20:22.019525  363058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:20:22.037171  363058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0229 02:20:22.037595  363058 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:20:22.038281  363058 main.go:141] libmachine: Using API Version  1
	I0229 02:20:22.038310  363058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:20:22.038727  363058 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:20:22.038981  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:20:22.039163  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:20:22.039457  363058 start.go:159] libmachine.API.Create for "old-k8s-version-275488" (driver="kvm2")
	I0229 02:20:22.039497  363058 client.go:168] LocalClient.Create starting
	I0229 02:20:22.039533  363058 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem
	I0229 02:20:22.039571  363058 main.go:141] libmachine: Decoding PEM data...
	I0229 02:20:22.039591  363058 main.go:141] libmachine: Parsing certificate...
	I0229 02:20:22.039708  363058 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem
	I0229 02:20:22.039741  363058 main.go:141] libmachine: Decoding PEM data...
	I0229 02:20:22.039760  363058 main.go:141] libmachine: Parsing certificate...
	I0229 02:20:22.039788  363058 main.go:141] libmachine: Running pre-create checks...
	I0229 02:20:22.039801  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .PreCreateCheck
	I0229 02:20:22.040200  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetConfigRaw
	I0229 02:20:22.040667  363058 main.go:141] libmachine: Creating machine...
	I0229 02:20:22.040684  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .Create
	I0229 02:20:22.040848  363058 main.go:141] libmachine: (old-k8s-version-275488) Creating KVM machine...
	I0229 02:20:22.041971  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found existing default KVM network
	I0229 02:20:22.043929  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:22.043764  363213 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002720d0}
	I0229 02:20:22.049227  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | trying to create private KVM network mk-old-k8s-version-275488 192.168.39.0/24...
	I0229 02:20:22.125556  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | private KVM network mk-old-k8s-version-275488 192.168.39.0/24 created
	I0229 02:20:22.125621  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:22.125447  363213 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:20:22.125646  363058 main.go:141] libmachine: (old-k8s-version-275488) Setting up store path in /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488 ...
	I0229 02:20:22.125669  363058 main.go:141] libmachine: (old-k8s-version-275488) Building disk image from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 02:20:22.125692  363058 main.go:141] libmachine: (old-k8s-version-275488) Downloading /home/jenkins/minikube-integration/18063-316644/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:20:22.415955  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:22.415789  363213 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa...
	I0229 02:20:22.827675  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:22.827555  363213 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/old-k8s-version-275488.rawdisk...
	I0229 02:20:22.827714  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Writing magic tar header
	I0229 02:20:22.827737  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Writing SSH key tar header
	I0229 02:20:22.827755  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:22.827708  363213 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488 ...
	I0229 02:20:22.827821  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488
	I0229 02:20:22.827888  363058 main.go:141] libmachine: (old-k8s-version-275488) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488 (perms=drwx------)
	I0229 02:20:22.827913  363058 main.go:141] libmachine: (old-k8s-version-275488) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines (perms=drwxr-xr-x)
	I0229 02:20:22.827929  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines
	I0229 02:20:22.827941  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:20:22.827951  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644
	I0229 02:20:22.827965  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 02:20:22.827975  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Checking permissions on dir: /home/jenkins
	I0229 02:20:22.827989  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Checking permissions on dir: /home
	I0229 02:20:22.827998  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Skipping /home - not owner
	I0229 02:20:22.828011  363058 main.go:141] libmachine: (old-k8s-version-275488) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube (perms=drwxr-xr-x)
	I0229 02:20:22.828031  363058 main.go:141] libmachine: (old-k8s-version-275488) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644 (perms=drwxrwxr-x)
	I0229 02:20:22.828045  363058 main.go:141] libmachine: (old-k8s-version-275488) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 02:20:22.828059  363058 main.go:141] libmachine: (old-k8s-version-275488) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 02:20:22.828071  363058 main.go:141] libmachine: (old-k8s-version-275488) Creating domain...
	I0229 02:20:22.829253  363058 main.go:141] libmachine: (old-k8s-version-275488) define libvirt domain using xml: 
	I0229 02:20:22.829282  363058 main.go:141] libmachine: (old-k8s-version-275488) <domain type='kvm'>
	I0229 02:20:22.829305  363058 main.go:141] libmachine: (old-k8s-version-275488)   <name>old-k8s-version-275488</name>
	I0229 02:20:22.829321  363058 main.go:141] libmachine: (old-k8s-version-275488)   <memory unit='MiB'>2200</memory>
	I0229 02:20:22.829339  363058 main.go:141] libmachine: (old-k8s-version-275488)   <vcpu>2</vcpu>
	I0229 02:20:22.829354  363058 main.go:141] libmachine: (old-k8s-version-275488)   <features>
	I0229 02:20:22.829364  363058 main.go:141] libmachine: (old-k8s-version-275488)     <acpi/>
	I0229 02:20:22.829371  363058 main.go:141] libmachine: (old-k8s-version-275488)     <apic/>
	I0229 02:20:22.829388  363058 main.go:141] libmachine: (old-k8s-version-275488)     <pae/>
	I0229 02:20:22.829402  363058 main.go:141] libmachine: (old-k8s-version-275488)     
	I0229 02:20:22.829411  363058 main.go:141] libmachine: (old-k8s-version-275488)   </features>
	I0229 02:20:22.829428  363058 main.go:141] libmachine: (old-k8s-version-275488)   <cpu mode='host-passthrough'>
	I0229 02:20:22.829436  363058 main.go:141] libmachine: (old-k8s-version-275488)   
	I0229 02:20:22.829443  363058 main.go:141] libmachine: (old-k8s-version-275488)   </cpu>
	I0229 02:20:22.829451  363058 main.go:141] libmachine: (old-k8s-version-275488)   <os>
	I0229 02:20:22.829458  363058 main.go:141] libmachine: (old-k8s-version-275488)     <type>hvm</type>
	I0229 02:20:22.829467  363058 main.go:141] libmachine: (old-k8s-version-275488)     <boot dev='cdrom'/>
	I0229 02:20:22.829474  363058 main.go:141] libmachine: (old-k8s-version-275488)     <boot dev='hd'/>
	I0229 02:20:22.829481  363058 main.go:141] libmachine: (old-k8s-version-275488)     <bootmenu enable='no'/>
	I0229 02:20:22.829487  363058 main.go:141] libmachine: (old-k8s-version-275488)   </os>
	I0229 02:20:22.829494  363058 main.go:141] libmachine: (old-k8s-version-275488)   <devices>
	I0229 02:20:22.829506  363058 main.go:141] libmachine: (old-k8s-version-275488)     <disk type='file' device='cdrom'>
	I0229 02:20:22.829537  363058 main.go:141] libmachine: (old-k8s-version-275488)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/boot2docker.iso'/>
	I0229 02:20:22.829545  363058 main.go:141] libmachine: (old-k8s-version-275488)       <target dev='hdc' bus='scsi'/>
	I0229 02:20:22.829554  363058 main.go:141] libmachine: (old-k8s-version-275488)       <readonly/>
	I0229 02:20:22.829561  363058 main.go:141] libmachine: (old-k8s-version-275488)     </disk>
	I0229 02:20:22.829571  363058 main.go:141] libmachine: (old-k8s-version-275488)     <disk type='file' device='disk'>
	I0229 02:20:22.829585  363058 main.go:141] libmachine: (old-k8s-version-275488)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 02:20:22.829602  363058 main.go:141] libmachine: (old-k8s-version-275488)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/old-k8s-version-275488.rawdisk'/>
	I0229 02:20:22.829610  363058 main.go:141] libmachine: (old-k8s-version-275488)       <target dev='hda' bus='virtio'/>
	I0229 02:20:22.829615  363058 main.go:141] libmachine: (old-k8s-version-275488)     </disk>
	I0229 02:20:22.829620  363058 main.go:141] libmachine: (old-k8s-version-275488)     <interface type='network'>
	I0229 02:20:22.829626  363058 main.go:141] libmachine: (old-k8s-version-275488)       <source network='mk-old-k8s-version-275488'/>
	I0229 02:20:22.829630  363058 main.go:141] libmachine: (old-k8s-version-275488)       <model type='virtio'/>
	I0229 02:20:22.829635  363058 main.go:141] libmachine: (old-k8s-version-275488)     </interface>
	I0229 02:20:22.829640  363058 main.go:141] libmachine: (old-k8s-version-275488)     <interface type='network'>
	I0229 02:20:22.829645  363058 main.go:141] libmachine: (old-k8s-version-275488)       <source network='default'/>
	I0229 02:20:22.829649  363058 main.go:141] libmachine: (old-k8s-version-275488)       <model type='virtio'/>
	I0229 02:20:22.829725  363058 main.go:141] libmachine: (old-k8s-version-275488)     </interface>
	I0229 02:20:22.829749  363058 main.go:141] libmachine: (old-k8s-version-275488)     <serial type='pty'>
	I0229 02:20:22.829761  363058 main.go:141] libmachine: (old-k8s-version-275488)       <target port='0'/>
	I0229 02:20:22.829769  363058 main.go:141] libmachine: (old-k8s-version-275488)     </serial>
	I0229 02:20:22.829777  363058 main.go:141] libmachine: (old-k8s-version-275488)     <console type='pty'>
	I0229 02:20:22.829786  363058 main.go:141] libmachine: (old-k8s-version-275488)       <target type='serial' port='0'/>
	I0229 02:20:22.829794  363058 main.go:141] libmachine: (old-k8s-version-275488)     </console>
	I0229 02:20:22.829801  363058 main.go:141] libmachine: (old-k8s-version-275488)     <rng model='virtio'>
	I0229 02:20:22.829811  363058 main.go:141] libmachine: (old-k8s-version-275488)       <backend model='random'>/dev/random</backend>
	I0229 02:20:22.829822  363058 main.go:141] libmachine: (old-k8s-version-275488)     </rng>
	I0229 02:20:22.829831  363058 main.go:141] libmachine: (old-k8s-version-275488)     
	I0229 02:20:22.829837  363058 main.go:141] libmachine: (old-k8s-version-275488)     
	I0229 02:20:22.829845  363058 main.go:141] libmachine: (old-k8s-version-275488)   </devices>
	I0229 02:20:22.829852  363058 main.go:141] libmachine: (old-k8s-version-275488) </domain>
	I0229 02:20:22.829863  363058 main.go:141] libmachine: (old-k8s-version-275488) 
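
The lines above print the domain XML the kvm2 driver feeds to libvirt: host-passthrough CPU, a cdrom (boot ISO) plus a raw virtio disk, two virtio NICs (the private mk- network and default), a serial console, and a virtio RNG. Defining and booting such a domain with the Go libvirt bindings looks roughly like the sketch below; the import path and the omitted error handling are assumptions, not the driver's actual code:

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt" // assumed import path for the Go bindings
    )

    func defineAndStart(domainXML string) {
        // same URI as KVMQemuURI in the cluster config
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
            log.Fatal(err)
        }
    }
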
	I0229 02:20:22.834599  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:c7:39:27 in network default
	I0229 02:20:22.835423  363058 main.go:141] libmachine: (old-k8s-version-275488) Ensuring networks are active...
	I0229 02:20:22.835454  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:22.836315  363058 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network default is active
	I0229 02:20:22.836667  363058 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network mk-old-k8s-version-275488 is active
	I0229 02:20:22.837550  363058 main.go:141] libmachine: (old-k8s-version-275488) Getting domain xml...
	I0229 02:20:22.838671  363058 main.go:141] libmachine: (old-k8s-version-275488) Creating domain...
	I0229 02:20:24.452123  363058 main.go:141] libmachine: (old-k8s-version-275488) Waiting to get IP...
	I0229 02:20:24.453154  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:24.454005  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:24.454040  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:24.453998  363213 retry.go:31] will retry after 198.557883ms: waiting for machine to come up
	I0229 02:20:24.654661  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:24.655621  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:24.655643  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:24.655572  363213 retry.go:31] will retry after 261.325972ms: waiting for machine to come up
	I0229 02:20:24.918586  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:24.919154  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:24.919178  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:24.919089  363213 retry.go:31] will retry after 481.432058ms: waiting for machine to come up
	I0229 02:20:25.401952  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:25.402734  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:25.402767  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:25.402675  363213 retry.go:31] will retry after 408.529073ms: waiting for machine to come up
	I0229 02:20:25.813536  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:25.814083  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:25.814112  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:25.814020  363213 retry.go:31] will retry after 721.315372ms: waiting for machine to come up
	I0229 02:20:26.537580  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:26.538173  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:26.538199  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:26.538077  363213 retry.go:31] will retry after 876.02338ms: waiting for machine to come up
	I0229 02:20:27.415529  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:27.416168  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:27.416195  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:27.416125  363213 retry.go:31] will retry after 980.922336ms: waiting for machine to come up
	I0229 02:20:28.398518  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:28.399062  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:28.399092  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:28.398991  363213 retry.go:31] will retry after 1.082024041s: waiting for machine to come up
	I0229 02:20:29.482747  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:29.483531  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:29.483562  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:29.483480  363213 retry.go:31] will retry after 1.13907019s: waiting for machine to come up
	I0229 02:20:30.624186  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:30.624753  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:30.624780  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:30.624676  363213 retry.go:31] will retry after 2.149650932s: waiting for machine to come up
	I0229 02:20:32.776033  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:32.776562  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:32.776589  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:32.776517  363213 retry.go:31] will retry after 2.325818907s: waiting for machine to come up
	I0229 02:20:35.104606  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:35.105227  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:35.105264  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:35.105174  363213 retry.go:31] will retry after 2.340619097s: waiting for machine to come up
	I0229 02:20:37.447905  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:37.448473  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:37.448501  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:37.448411  363213 retry.go:31] will retry after 4.042567195s: waiting for machine to come up
	I0229 02:20:41.495474  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:41.496087  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:41.496116  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:41.496030  363213 retry.go:31] will retry after 3.786356857s: waiting for machine to come up
	I0229 02:20:45.284896  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:45.285495  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:20:45.285532  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:20:45.285468  363213 retry.go:31] will retry after 6.931043433s: waiting for machine to come up
	I0229 02:20:52.218153  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.218928  363058 main.go:141] libmachine: (old-k8s-version-275488) Found IP for machine: 192.168.39.160
	I0229 02:20:52.218983  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has current primary IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.218998  363058 main.go:141] libmachine: (old-k8s-version-275488) Reserving static IP address...
	I0229 02:20:52.219337  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"} in network mk-old-k8s-version-275488
	I0229 02:20:52.342073  363058 main.go:141] libmachine: (old-k8s-version-275488) Reserved static IP address: 192.168.39.160
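
The "waiting for machine to come up" loop above retries with growing, jittered delays (199ms up to 6.9s) rather than a fixed interval, which spreads load while the VM obtains a DHCP lease. A self-contained sketch of that backoff shape follows; the constants and jitter scheme are assumptions, since retry.go's internals are not shown in the log:

    package main

    import (
        "errors"
        "math/rand"
        "time"
    )

    // waitFor retries fn with jittered, roughly doubling delays until it
    // succeeds or the deadline passes.
    func waitFor(fn func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for {
            if err := fn(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for machine to come up")
            }
            // jitter: sleep between 0.5x and 1.5x of the nominal delay
            time.Sleep(delay/2 + time.Duration(rand.Int63n(int64(delay))))
            if delay < 5*time.Second {
                delay *= 2 // cap growth so later retries stay bounded
            }
        }
    }
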
	I0229 02:20:52.342106  363058 main.go:141] libmachine: (old-k8s-version-275488) Waiting for SSH to be available...
	I0229 02:20:52.342117  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Getting to WaitForSSH function...
	I0229 02:20:52.345596  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.346193  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:52.346246  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.346642  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH client type: external
	I0229 02:20:52.346671  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa (-rw-------)
	I0229 02:20:52.346704  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:20:52.346724  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | About to run SSH command:
	I0229 02:20:52.346771  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | exit 0
	I0229 02:20:52.474829  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | SSH cmd err, output: <nil>: 
	I0229 02:20:52.475149  363058 main.go:141] libmachine: (old-k8s-version-275488) KVM machine creation complete!
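
WaitForSSH above shells out to the system ssh with the options logged (no host-key checking, key-only auth) and treats a clean "exit 0" as proof that sshd is up. A sketch of that probe via os/exec, using a representative subset of the logged flags (the helper name is hypothetical):

    package main

    import "os/exec"

    // sshReady returns nil once `exit 0` succeeds over ssh, i.e. sshd is reachable.
    func sshReady(ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
            "-i", keyPath, "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("/usr/bin/ssh", args...).Run()
    }
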
	I0229 02:20:52.475484  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetConfigRaw
	I0229 02:20:52.492297  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:20:52.492609  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:20:52.492841  363058 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 02:20:52.492864  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetState
	I0229 02:20:52.494282  363058 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 02:20:52.494299  363058 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 02:20:52.494305  363058 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 02:20:52.494312  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:52.497024  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.497401  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:52.497433  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.497619  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:20:52.497796  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:52.497999  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:52.498181  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:20:52.498512  363058 main.go:141] libmachine: Using SSH client type: native
	I0229 02:20:52.498780  363058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:20:52.498799  363058 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 02:20:52.610508  363058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:20:52.610544  363058 main.go:141] libmachine: Detecting the provisioner...
	I0229 02:20:52.610558  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:52.613610  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.614734  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:52.614763  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.615002  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:20:52.615225  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:52.615423  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:52.615591  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:20:52.615780  363058 main.go:141] libmachine: Using SSH client type: native
	I0229 02:20:52.616017  363058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:20:52.616032  363058 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 02:20:52.727986  363058 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 02:20:52.728103  363058 main.go:141] libmachine: found compatible host: buildroot
	I0229 02:20:52.728121  363058 main.go:141] libmachine: Provisioning with buildroot...
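
Provisioner detection runs `cat /etc/os-release` over SSH and keys off its fields, here matching ID=buildroot. Parsing that file is a few lines of stdlib Go; this is a sketch of the idea, not minikube's provision code:

    package main

    import (
        "bufio"
        "strings"
    )

    // parseOSRelease turns /etc/os-release content into a key/value map,
    // e.g. m["ID"] == "buildroot", m["VERSION_ID"] == "2023.02.9".
    func parseOSRelease(content string) map[string]string {
        m := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(content))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            m[k] = strings.Trim(v, `"`) // values may be quoted, e.g. PRETTY_NAME
        }
        return m
    }
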
	I0229 02:20:52.728133  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:20:52.728425  363058 buildroot.go:166] provisioning hostname "old-k8s-version-275488"
	I0229 02:20:52.728461  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:20:52.728688  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:52.731757  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.732122  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:52.732155  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.732400  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:20:52.732630  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:52.732784  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:52.732948  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:20:52.733143  363058 main.go:141] libmachine: Using SSH client type: native
	I0229 02:20:52.733365  363058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:20:52.733384  363058 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275488 && echo "old-k8s-version-275488" | sudo tee /etc/hostname
	I0229 02:20:52.873534  363058 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275488
	
	I0229 02:20:52.873569  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:52.876827  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.877282  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:52.877309  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:52.877473  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:20:52.877702  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:52.877921  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:52.878136  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:20:52.878340  363058 main.go:141] libmachine: Using SSH client type: native
	I0229 02:20:52.878574  363058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:20:52.878599  363058 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275488/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:20:53.010154  363058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:20:53.010187  363058 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:20:53.010215  363058 buildroot.go:174] setting up certificates
	I0229 02:20:53.010246  363058 provision.go:83] configureAuth start
	I0229 02:20:53.010261  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:20:53.010619  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:20:53.013826  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.014332  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:53.014363  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.014557  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:53.016766  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.017063  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:53.017105  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.017206  363058 provision.go:138] copyHostCerts
	I0229 02:20:53.017278  363058 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:20:53.017303  363058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:20:53.017413  363058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:20:53.017621  363058 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:20:53.017639  363058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:20:53.017692  363058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:20:53.017853  363058 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:20:53.017869  363058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:20:53.017916  363058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:20:53.018019  363058 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275488 san=[192.168.39.160 192.168.39.160 localhost 127.0.0.1 minikube old-k8s-version-275488]
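
The server cert above is minted from the local minikube CA with SANs covering the VM IP, localhost, and both hostnames. With crypto/x509 the core of such a generator looks like the sketch below; CA loading, PEM encoding, and the exact validity window are assumptions (26280h matches the CertExpiration in the config):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert signs a server certificate with the given CA, embedding the
    // IP and DNS SANs seen in the log (192.168.39.160, localhost, minikube, ...).
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-275488"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dnsNames,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }
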
	I0229 02:20:53.249139  363058 provision.go:172] copyRemoteCerts
	I0229 02:20:53.249197  363058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:20:53.249224  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:53.252409  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.252822  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:53.252870  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.253072  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:20:53.253294  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:53.253489  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:20:53.253719  363058 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:20:53.351449  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:20:53.383424  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 02:20:53.418852  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:20:53.447850  363058 provision.go:86] duration metric: configureAuth took 437.587663ms
	I0229 02:20:53.447886  363058 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:20:53.448119  363058 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:20:53.448223  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:53.450925  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.451304  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:53.451339  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.451495  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:20:53.451709  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:53.451892  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:53.452041  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:20:53.452199  363058 main.go:141] libmachine: Using SSH client type: native
	I0229 02:20:53.452409  363058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:20:53.452433  363058 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:20:53.775733  363058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:20:53.775770  363058 main.go:141] libmachine: Checking connection to Docker...
	I0229 02:20:53.775781  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetURL
	I0229 02:20:53.777156  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using libvirt version 6000000
	I0229 02:20:53.779353  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.779694  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:53.779728  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.779928  363058 main.go:141] libmachine: Docker is up and running!
	I0229 02:20:53.779945  363058 main.go:141] libmachine: Reticulating splines...
	I0229 02:20:53.779953  363058 client.go:171] LocalClient.Create took 31.740444271s
	I0229 02:20:53.779981  363058 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-275488" took 31.740525216s
	I0229 02:20:53.779995  363058 start.go:300] post-start starting for "old-k8s-version-275488" (driver="kvm2")
	I0229 02:20:53.780008  363058 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:20:53.780028  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:20:53.780286  363058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:20:53.780314  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:53.782569  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.782926  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:53.782963  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.783045  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:20:53.783238  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:53.783407  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:20:53.783553  363058 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:20:53.875337  363058 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:20:53.881906  363058 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:20:53.881936  363058 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:20:53.882010  363058 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:20:53.882131  363058 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:20:53.882300  363058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:20:53.898364  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:20:53.936047  363058 start.go:303] post-start completed in 156.034847ms
	I0229 02:20:53.936105  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetConfigRaw
	I0229 02:20:53.936788  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:20:53.940551  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.940925  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:53.940956  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.941330  363058 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:20:53.941574  363058 start.go:128] duration metric: createHost completed in 31.92410676s
	I0229 02:20:53.941601  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:53.944417  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.944935  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:53.944973  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:53.945129  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:20:53.945328  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:53.945484  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:53.945617  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:20:53.945771  363058 main.go:141] libmachine: Using SSH client type: native
	I0229 02:20:53.945950  363058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:20:53.945961  363058 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:20:54.061100  363058 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173254.049821133
	
	I0229 02:20:54.061138  363058 fix.go:206] guest clock: 1709173254.049821133
	I0229 02:20:54.061149  363058 fix.go:219] Guest: 2024-02-29 02:20:54.049821133 +0000 UTC Remote: 2024-02-29 02:20:53.941588876 +0000 UTC m=+46.089794284 (delta=108.232257ms)
	I0229 02:20:54.061190  363058 fix.go:190] guest clock delta is within tolerance: 108.232257ms
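
The fix step above reads the guest's `date +%s.%N`, subtracts the host clock, and only resyncs when the delta exceeds a tolerance; here 108.232257ms is within bounds. The check itself is simple, sketched below (the parsing helper is hypothetical, and float parsing truncates nanosecond precision, which is fine for a millisecond-scale tolerance):

    package main

    import (
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far
    // the guest clock is ahead of (positive) or behind (negative) the host.
    func clockDelta(guestOut string) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64) // e.g. "1709173254.049821133"
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(time.Now()), nil
    }
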
	I0229 02:20:54.061200  363058 start.go:83] releasing machines lock for "old-k8s-version-275488", held for 32.043938959s
	I0229 02:20:54.061237  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:20:54.061532  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:20:54.065101  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:54.065591  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:54.065626  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:54.065785  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:20:54.066417  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:20:54.066648  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:20:54.066742  363058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:20:54.066800  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:54.066884  363058 ssh_runner.go:195] Run: cat /version.json
	I0229 02:20:54.066910  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:20:54.069747  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:54.070103  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:54.070140  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:54.070154  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:54.070454  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:20:54.070559  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:54.070579  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:54.070608  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:54.070726  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:20:54.070770  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:20:54.070895  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:20:54.070886  363058 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:20:54.071047  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:20:54.071214  363058 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:20:54.184924  363058 ssh_runner.go:195] Run: systemctl --version
	I0229 02:20:54.194946  363058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:20:54.397736  363058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:20:54.405995  363058 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:20:54.406076  363058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:20:54.432005  363058 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
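
Disabling the conflicting CNI configs is just a rename: anything matching *bridge* or *podman* in /etc/cni/net.d gets a .mk_disabled suffix so the runtime stops loading it. A rough Go equivalent of the `find ... -exec mv` above (a sketch, run against the guest's /etc/cni/net.d):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pattern)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled on a previous run
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
					continue
				}
				fmt.Println("disabled", m)
			}
		}
	}
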
	I0229 02:20:54.432037  363058 start.go:475] detecting cgroup driver to use...
	I0229 02:20:54.432117  363058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:20:54.454790  363058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:20:54.474902  363058 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:20:54.474966  363058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:20:54.495660  363058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:20:54.513877  363058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:20:54.722370  363058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:20:54.927820  363058 docker.go:233] disabling docker service ...
	I0229 02:20:54.927879  363058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:20:54.951381  363058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:20:54.974946  363058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:20:55.136835  363058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:20:55.281965  363058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:20:55.304264  363058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:20:55.329017  363058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 02:20:55.329092  363058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:20:55.344850  363058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:20:55.344923  363058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:20:55.360449  363058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:20:55.377570  363058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
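
Those three sed invocations pin the pause image, switch cri-o to the cgroupfs cgroup manager, and reset conmon_cgroup to "pod" in the 02-crio.conf drop-in. The same line-oriented rewrite expressed in Go (a sketch over sample contents, not the actual file on the VM):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Sample lines standing in for /etc/crio/crio.conf.d/02-crio.conf.
		in := []string{
			`pause_image = "registry.k8s.io/pause:3.9"`,
			`cgroup_manager = "systemd"`,
			`conmon_cgroup = "system.slice"`,
		}

		var out []string
		for _, line := range in {
			switch {
			case strings.Contains(line, "pause_image = "):
				out = append(out, `pause_image = "registry.k8s.io/pause:3.1"`)
			case strings.Contains(line, "cgroup_manager = "):
				// mirror the sed pair: rewrite the manager, then re-add conmon_cgroup after it
				out = append(out, `cgroup_manager = "cgroupfs"`, `conmon_cgroup = "pod"`)
			case strings.Contains(line, "conmon_cgroup = "):
				// dropped here; re-added in canonical position above
			default:
				out = append(out, line)
			}
		}
		fmt.Println(strings.Join(out, "\n"))
	}
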
	I0229 02:20:55.395131  363058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:20:55.412337  363058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:20:55.430364  363058 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:20:55.430444  363058 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:20:55.453268  363058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:20:55.468529  363058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:20:55.655490  363058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:20:55.875535  363058 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:20:55.875622  363058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:20:55.881681  363058 start.go:543] Will wait 60s for crictl version
	I0229 02:20:55.881767  363058 ssh_runner.go:195] Run: which crictl
	I0229 02:20:55.886491  363058 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:20:55.932665  363058 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:20:55.932753  363058 ssh_runner.go:195] Run: crio --version
	I0229 02:20:55.980820  363058 ssh_runner.go:195] Run: crio --version
	I0229 02:20:56.028937  363058 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 02:20:56.030401  363058 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:20:56.033611  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:56.034054  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:20:56.034080  363058 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:20:56.034356  363058 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:20:56.042908  363058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
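
The `{ grep -v ...; echo ...; } > /tmp/h.$$` idiom drops any stale host.minikube.internal entry, appends the current one, and then copies the temp file over /etc/hosts in a single step. The same filter-and-append in Go (a sketch that rewrites an in-memory copy rather than /etc/hosts itself):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n"

		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue // drop the stale entry, as grep -v does
			}
			kept = append(kept, line)
		}
		kept = append(kept, "192.168.39.1\thost.minikube.internal")
		fmt.Println(strings.Join(kept, "\n"))
	}
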
	I0229 02:20:56.064842  363058 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:20:56.064895  363058 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:20:56.126454  363058 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:20:56.126542  363058 ssh_runner.go:195] Run: which lz4
	I0229 02:20:56.132911  363058 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:20:56.140563  363058 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:20:56.140604  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 02:20:58.650802  363058 crio.go:444] Took 2.517938 seconds to copy over tarball
	I0229 02:20:58.650911  363058 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:21:01.744985  363058 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.094023729s)
	I0229 02:21:01.745025  363058 crio.go:451] Took 3.094189 seconds to extract the tarball
	I0229 02:21:01.745036  363058 ssh_runner.go:146] rm: /preloaded.tar.lz4
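
The preload path runs: stat /preloaded.tar.lz4 on the guest (missing on first start), scp the 441 MB tarball over, stream-extract it into /var, then delete it. A sketch of that extract step via os/exec (illustrative; assumes lz4 and GNU tar are present, as they are in the minikube guest image):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Mirrors the command in the log: decompress with lz4 and unpack
		// into /var, preserving xattrs so file capabilities survive.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}
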
	I0229 02:21:01.817330  363058 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:21:01.886778  363058 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:21:01.886812  363058 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:21:01.886865  363058 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:21:01.886904  363058 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:21:01.886925  363058 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:21:01.886960  363058 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:21:01.886884  363058 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:21:01.887234  363058 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:21:01.887250  363058 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:21:01.887309  363058 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:21:01.888687  363058 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:21:01.890016  363058 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:21:01.890024  363058 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:21:01.890029  363058 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:21:01.890046  363058 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:21:01.890044  363058 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:21:01.890124  363058 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:21:01.890130  363058 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:21:02.014904  363058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 02:21:02.015371  363058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 02:21:02.017873  363058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:21:02.022681  363058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:21:02.028005  363058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 02:21:02.030272  363058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:21:02.050514  363058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:21:02.212805  363058 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:21:02.212860  363058 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:21:02.212915  363058 ssh_runner.go:195] Run: which crictl
	I0229 02:21:02.212910  363058 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:21:02.213014  363058 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:21:02.213060  363058 ssh_runner.go:195] Run: which crictl
	I0229 02:21:02.265275  363058 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:21:02.265335  363058 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:21:02.265369  363058 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:21:02.265396  363058 ssh_runner.go:195] Run: which crictl
	I0229 02:21:02.265410  363058 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:21:02.265457  363058 ssh_runner.go:195] Run: which crictl
	I0229 02:21:02.276698  363058 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:21:02.276728  363058 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:21:02.276758  363058 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:21:02.276760  363058 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:21:02.276761  363058 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:21:02.276809  363058 ssh_runner.go:195] Run: which crictl
	I0229 02:21:02.276815  363058 ssh_runner.go:195] Run: which crictl
	I0229 02:21:02.276829  363058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:21:02.276832  363058 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:21:02.276899  363058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:21:02.276902  363058 ssh_runner.go:195] Run: which crictl
	I0229 02:21:02.276940  363058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:21:02.276973  363058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:21:02.370070  363058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:21:02.370157  363058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:21:02.370179  363058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:21:02.404985  363058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:21:02.404985  363058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:21:02.405041  363058 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:21:02.405071  363058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:21:02.465904  363058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:21:02.466041  363058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:21:02.490534  363058 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:21:02.871562  363058 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:21:03.026520  363058 cache_images.go:92] LoadImages completed in 1.139689307s
	W0229 02:21:03.026605  363058 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
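
LoadImages falls back to the per-image cache only after the runtime reports an image missing; each fallback here fails because the files under .minikube/cache/images were never downloaded in this run. A sketch of that existence check (cachedImagePath is a hypothetical helper, not minikube's cache_images.go API):

	package main

	import (
		"fmt"
		"os"
	)

	// cachedImagePath is a hypothetical helper mapping an image name to the
	// on-disk cache location seen in the log above.
	func cachedImagePath(name string) string {
		return "/home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/" + name
	}

	func main() {
		p := cachedImagePath("registry.k8s.io/coredns_1.6.2")
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("X Unable to load cached images: loading cached images: %v\n", err)
			return
		}
		fmt.Println("cache hit:", p)
	}

The warning is non-fatal: kubeadm pulls whatever is still missing during its own preflight, as the later "[preflight] Pulling images ..." lines show.
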
	I0229 02:21:03.026681  363058 ssh_runner.go:195] Run: crio config
	I0229 02:21:03.106348  363058 cni.go:84] Creating CNI manager for ""
	I0229 02:21:03.106375  363058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:21:03.106397  363058 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:21:03.106423  363058 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275488 NodeName:old-k8s-version-275488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:21:03.106627  363058 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-275488
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.160:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:21:03.106747  363058 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275488 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
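
kubeadm.go:181 renders the config dump above from the options struct at kubeadm.go:176; minikube keeps versioned Go templates for this. A minimal sketch of the technique with text/template (a toy template, not minikube's real one):

	package main

	import (
		"os"
		"text/template"
	)

	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.39.160",
			APIServerPort:    8443,
			NodeName:         "old-k8s-version-275488",
		})
	}
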
	I0229 02:21:03.106811  363058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:21:03.123576  363058 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:21:03.123659  363058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:21:03.136318  363058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 02:21:03.163917  363058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:21:03.188542  363058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 02:21:03.221230  363058 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I0229 02:21:03.227480  363058 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:21:03.247905  363058 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488 for IP: 192.168.39.160
	I0229 02:21:03.247939  363058 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:21:03.248102  363058 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:21:03.248165  363058 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:21:03.248226  363058 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.key
	I0229 02:21:03.248245  363058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.crt with IP's: []
	I0229 02:21:03.336172  363058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.crt ...
	I0229 02:21:03.336210  363058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.crt: {Name:mk5b1d658acc3c772765fbc38ed9bac1511e524d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:21:03.336401  363058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.key ...
	I0229 02:21:03.336425  363058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.key: {Name:mkb6c9dddd55ab16332314555be0d57591420912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:21:03.336502  363058 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619
	I0229 02:21:03.336518  363058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt.80b25619 with IP's: [192.168.39.160 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:21:03.573871  363058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt.80b25619 ...
	I0229 02:21:03.573897  363058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt.80b25619: {Name:mk401ff0871f9cb3f938920afd1c458d74e459b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:21:03.574086  363058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619 ...
	I0229 02:21:03.574106  363058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619: {Name:mk790915b3cd4d74e4db0b89cb37e186bd28a0ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:21:03.574199  363058 certs.go:337] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt.80b25619 -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt
	I0229 02:21:03.574345  363058 certs.go:341] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619 -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key
	I0229 02:21:03.574430  363058 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key
	I0229 02:21:03.574450  363058 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt with IP's: []
	I0229 02:21:03.751835  363058 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt ...
	I0229 02:21:03.751863  363058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt: {Name:mk4abe644dee28f4a63715df6cded861078d14d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:21:03.752009  363058 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key ...
	I0229 02:21:03.752022  363058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key: {Name:mk0f7451576196f2010f77fb0ff6276984dc4ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
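
crypto.go:68 generates each keypair and issues certificates carrying the IP SANs listed above (192.168.39.160, 10.96.0.1, 127.0.0.1, 10.0.0.1). A self-contained crypto/x509 sketch of issuing such a cert (self-signed here for brevity; minikube signs with the minikubeCA key instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// the IP SANs from the apiserver cert in the log above
			IPAddresses: []net.IP{
				net.ParseIP("192.168.39.160"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
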
	I0229 02:21:03.752188  363058 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:21:03.752225  363058 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:21:03.752237  363058 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:21:03.752259  363058 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:21:03.752284  363058 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:21:03.752304  363058 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:21:03.752342  363058 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:21:03.753022  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:21:03.788697  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:21:03.817368  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:21:03.847566  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:21:03.876548  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:21:03.909475  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:21:03.944466  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:21:03.975089  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:21:04.009058  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:21:04.039669  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:21:04.067948  363058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:21:04.096813  363058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:21:04.115887  363058 ssh_runner.go:195] Run: openssl version
	I0229 02:21:04.122443  363058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:21:04.135235  363058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:21:04.140445  363058 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:21:04.140525  363058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:21:04.147472  363058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:21:04.162392  363058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:21:04.175438  363058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:21:04.180602  363058 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:21:04.180664  363058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:21:04.187655  363058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:21:04.201133  363058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:21:04.217716  363058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:21:04.225309  363058 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:21:04.225396  363058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:21:04.232154  363058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
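
Each CA lands in /usr/share/ca-certificates and is symlinked under /etc/ssl/certs as <subject-hash>.0, which is how OpenSSL discovers trust roots. A sketch that derives the link name the same way the log does, by shelling out to openssl x509 -hash:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "openssl:", err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
		fmt.Printf("would run: ln -fs %s /etc/ssl/certs/%s.0\n", pemPath, hash)
	}
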
	I0229 02:21:04.252273  363058 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:21:04.258754  363058 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:21:04.258806  363058 kubeadm.go:404] StartCluster: {Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:21:04.258889  363058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:21:04.258953  363058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:21:04.306971  363058 cri.go:89] found id: ""
	I0229 02:21:04.307049  363058 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:21:04.320705  363058 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:21:04.333620  363058 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:21:04.346105  363058 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:21:04.346148  363058 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:21:04.814110  363058 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:23:03.367227  363058 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:23:03.367347  363058 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:23:03.368996  363058 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:23:03.369076  363058 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:23:03.369188  363058 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:23:03.369319  363058 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:23:03.369445  363058 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:23:03.369573  363058 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:23:03.369676  363058 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:23:03.369744  363058 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:23:03.369818  363058 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:23:03.371478  363058 out.go:204]   - Generating certificates and keys ...
	I0229 02:23:03.371589  363058 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:23:03.371674  363058 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:23:03.371758  363058 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:23:03.371834  363058 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:23:03.371904  363058 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:23:03.371956  363058 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:23:03.372016  363058 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:23:03.372161  363058 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-275488 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	I0229 02:23:03.372218  363058 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:23:03.372355  363058 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-275488 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	I0229 02:23:03.372426  363058 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:23:03.372496  363058 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:23:03.372546  363058 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:23:03.372609  363058 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:23:03.372666  363058 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:23:03.372723  363058 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:23:03.372802  363058 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:23:03.372863  363058 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:23:03.372936  363058 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:23:03.375213  363058 out.go:204]   - Booting up control plane ...
	I0229 02:23:03.375319  363058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:23:03.375401  363058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:23:03.375479  363058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:23:03.375574  363058 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:23:03.375751  363058 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:23:03.375817  363058 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:23:03.375896  363058 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:23:03.376097  363058 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:23:03.376175  363058 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:23:03.376399  363058 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:23:03.376476  363058 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:23:03.376681  363058 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:23:03.376763  363058 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:23:03.376964  363058 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:23:03.377042  363058 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:23:03.377261  363058 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:23:03.377269  363058 kubeadm.go:322] 
	I0229 02:23:03.377314  363058 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:23:03.377354  363058 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:23:03.377360  363058 kubeadm.go:322] 
	I0229 02:23:03.377399  363058 kubeadm.go:322] This error is likely caused by:
	I0229 02:23:03.377435  363058 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:23:03.377553  363058 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:23:03.377561  363058 kubeadm.go:322] 
	I0229 02:23:03.377682  363058 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:23:03.377723  363058 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:23:03.377836  363058 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:23:03.377858  363058 kubeadm.go:322] 
	I0229 02:23:03.378000  363058 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:23:03.378116  363058 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:23:03.378212  363058 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:23:03.378551  363058 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:23:03.378669  363058 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:23:03.378733  363058 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 02:23:03.378882  363058 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-275488 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-275488 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
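
The repeated kubelet-check lines are kubeadm polling the kubelet's local healthz endpoint on port 10248 until the 4m0s wait-control-plane budget runs out; here the kubelet never came up, so every probe is refused. A sketch of that probe loop (illustrative; the poll interval is an assumption):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // kubeadm's wait-control-plane budget
		for time.Now().Before(deadline) {
			resp, err := http.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet is healthy")
					return
				}
			}
			fmt.Println("[kubelet-check] It seems like the kubelet isn't running or healthy.")
			time.Sleep(5 * time.Second) // assumed poll interval
		}
		fmt.Println("timed out waiting for the condition")
	}
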
	
	I0229 02:23:03.378944  363058 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:23:03.897763  363058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:23:03.915368  363058 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:23:03.927732  363058 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:23:03.927787  363058 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:23:03.997384  363058 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:23:03.997475  363058 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:23:04.129962  363058 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:23:04.130102  363058 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:23:04.130203  363058 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:23:04.352134  363058 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:23:04.352293  363058 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:23:04.360556  363058 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:23:04.520146  363058 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:23:04.522694  363058 out.go:204]   - Generating certificates and keys ...
	I0229 02:23:04.522802  363058 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:23:04.522877  363058 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:23:04.522977  363058 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:23:04.523069  363058 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:23:04.523164  363058 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:23:04.523259  363058 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:23:04.523361  363058 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:23:04.523457  363058 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:23:04.523560  363058 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:23:04.523661  363058 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:23:04.523721  363058 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:23:04.523816  363058 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:23:04.623824  363058 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:23:04.886857  363058 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:23:05.064692  363058 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:23:05.176661  363058 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:23:05.177622  363058 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:23:05.179306  363058 out.go:204]   - Booting up control plane ...
	I0229 02:23:05.179468  363058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:23:05.187620  363058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:23:05.188716  363058 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:23:05.189514  363058 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:23:05.192796  363058 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:23:45.194329  363058 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:23:45.195017  363058 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:23:45.195214  363058 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:23:50.195795  363058 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:23:50.196054  363058 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:24:00.196805  363058 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:24:00.197056  363058 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:24:20.197949  363058 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:24:20.198134  363058 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:25:00.198097  363058 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:25:00.198382  363058 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:25:00.198408  363058 kubeadm.go:322] 
	I0229 02:25:00.198465  363058 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:25:00.198533  363058 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:25:00.198547  363058 kubeadm.go:322] 
	I0229 02:25:00.198589  363058 kubeadm.go:322] This error is likely caused by:
	I0229 02:25:00.198647  363058 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:25:00.198804  363058 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:25:00.198815  363058 kubeadm.go:322] 
	I0229 02:25:00.198947  363058 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:25:00.198997  363058 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:25:00.199036  363058 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:25:00.199046  363058 kubeadm.go:322] 
	I0229 02:25:00.199195  363058 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:25:00.199312  363058 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:25:00.199454  363058 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:25:00.199498  363058 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:25:00.199630  363058 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:25:00.199709  363058 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:25:00.199822  363058 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:25:00.199893  363058 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:25:00.200024  363058 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:25:00.200079  363058 kubeadm.go:406] StartCluster complete in 3m55.941279011s
	I0229 02:25:00.200133  363058 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:25:00.200206  363058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:25:00.246693  363058 cri.go:89] found id: ""
	I0229 02:25:00.246725  363058 logs.go:276] 0 containers: []
	W0229 02:25:00.246737  363058 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:25:00.246745  363058 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:25:00.246798  363058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:25:00.282671  363058 cri.go:89] found id: ""
	I0229 02:25:00.282696  363058 logs.go:276] 0 containers: []
	W0229 02:25:00.282705  363058 logs.go:278] No container was found matching "etcd"
	I0229 02:25:00.282711  363058 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:25:00.282762  363058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:25:00.318664  363058 cri.go:89] found id: ""
	I0229 02:25:00.318690  363058 logs.go:276] 0 containers: []
	W0229 02:25:00.318698  363058 logs.go:278] No container was found matching "coredns"
	I0229 02:25:00.318705  363058 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:25:00.318758  363058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:25:00.353816  363058 cri.go:89] found id: ""
	I0229 02:25:00.353847  363058 logs.go:276] 0 containers: []
	W0229 02:25:00.353861  363058 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:25:00.353868  363058 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:25:00.353941  363058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:25:00.388881  363058 cri.go:89] found id: ""
	I0229 02:25:00.388906  363058 logs.go:276] 0 containers: []
	W0229 02:25:00.388914  363058 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:25:00.388920  363058 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:25:00.388979  363058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:25:00.441900  363058 cri.go:89] found id: ""
	I0229 02:25:00.441933  363058 logs.go:276] 0 containers: []
	W0229 02:25:00.441942  363058 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:25:00.441948  363058 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:25:00.442011  363058 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:25:00.477401  363058 cri.go:89] found id: ""
	I0229 02:25:00.477427  363058 logs.go:276] 0 containers: []
	W0229 02:25:00.477436  363058 logs.go:278] No container was found matching "kindnet"
	I0229 02:25:00.477447  363058 logs.go:123] Gathering logs for kubelet ...
	I0229 02:25:00.477460  363058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:25:00.523332  363058 logs.go:123] Gathering logs for dmesg ...
	I0229 02:25:00.523368  363058 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:25:00.540671  363058 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:25:00.540699  363058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:25:00.671889  363058 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:25:00.671914  363058 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:25:00.671928  363058 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:25:00.763824  363058 logs.go:123] Gathering logs for container status ...
	I0229 02:25:00.763864  363058 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 02:25:00.805712  363058 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:25:00.805771  363058 out.go:239] * 
	W0229 02:25:00.805838  363058 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:25:00.805868  363058 out.go:239] * 
	W0229 02:25:00.806812  363058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:25:00.809590  363058 out.go:177] 
	W0229 02:25:00.810582  363058 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:25:00.810633  363058 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:25:00.810658  363058 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:25:00.812128  363058 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-275488 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0": exit status 109
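Note: the `[kubelet-check]` probe that kubeadm retried above is a plain HTTP GET against the kubelet healthz endpoint on port 10248. While the VM is still up it can be re-run by hand; a minimal sketch, assuming the profile VM is reachable over `minikube ssh`, using only commands quoted in the log:

	# Re-run kubeadm's kubelet health probe inside the VM; a healthy kubelet
	# answers "ok", while "connection refused" matches the failure captured above.
	minikube -p old-k8s-version-275488 ssh "curl -sSL http://localhost:10248/healthz"
	# The two troubleshooting commands kubeadm itself suggests (tail added here):
	minikube -p old-k8s-version-275488 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-275488 ssh "sudo journalctl -xeu kubelet | tail -n 50"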
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 6 (235.399287ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:25:01.084045  369110 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-275488" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (293.25s)
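The suggestion in the log above points at a kubelet/container-runtime cgroup-driver mismatch, a common cause of this failure mode with CRI-O. A sketch of how one might compare the two sides; the config paths are the usual CRI-O and kubeadm defaults, not values verified against this run:

	# Compare the cgroup driver configured on each side (paths are assumed defaults).
	minikube -p old-k8s-version-275488 ssh "grep -i cgroup_manager /etc/crio/crio.conf"
	minikube -p old-k8s-version-275488 ssh "grep -i cgroupdriver /var/lib/kubelet/config.yaml"
	# If they disagree, retry the profile with the flag the log suggests:
	minikube start -p old-k8s-version-275488 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd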

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-915633 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-915633 --alsologtostderr -v=3: exit status 82 (2m0.310032086s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-915633"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 02:23:03.433869  368648 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:23:03.434033  368648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:23:03.434045  368648 out.go:304] Setting ErrFile to fd 2...
	I0229 02:23:03.434052  368648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:23:03.436731  368648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:23:03.437133  368648 out.go:298] Setting JSON to false
	I0229 02:23:03.437252  368648 mustload.go:65] Loading cluster: embed-certs-915633
	I0229 02:23:03.438555  368648 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:23:03.438687  368648 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/config.json ...
	I0229 02:23:03.438885  368648 mustload.go:65] Loading cluster: embed-certs-915633
	I0229 02:23:03.439040  368648 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:23:03.439079  368648 stop.go:39] StopHost: embed-certs-915633
	I0229 02:23:03.440018  368648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:23:03.440114  368648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:23:03.460430  368648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44385
	I0229 02:23:03.461483  368648 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:23:03.462759  368648 main.go:141] libmachine: Using API Version  1
	I0229 02:23:03.462871  368648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:23:03.463579  368648 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:23:03.469583  368648 out.go:177] * Stopping node "embed-certs-915633"  ...
	I0229 02:23:03.471175  368648 main.go:141] libmachine: Stopping "embed-certs-915633"...
	I0229 02:23:03.471214  368648 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:23:03.473725  368648 main.go:141] libmachine: (embed-certs-915633) Calling .Stop
	I0229 02:23:03.478573  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 0/120
	I0229 02:23:04.479788  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 1/120
	I0229 02:23:05.481413  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 2/120
	I0229 02:23:06.482659  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 3/120
	I0229 02:23:07.484755  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 4/120
	I0229 02:23:08.486806  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 5/120
	I0229 02:23:09.488198  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 6/120
	I0229 02:23:10.489600  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 7/120
	I0229 02:23:11.491060  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 8/120
	I0229 02:23:12.492852  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 9/120
	I0229 02:23:13.494607  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 10/120
	I0229 02:23:14.496716  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 11/120
	I0229 02:23:15.498067  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 12/120
	I0229 02:23:16.499440  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 13/120
	I0229 02:23:17.501003  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 14/120
	I0229 02:23:18.502833  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 15/120
	I0229 02:23:19.505058  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 16/120
	I0229 02:23:20.506532  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 17/120
	I0229 02:23:21.507964  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 18/120
	I0229 02:23:22.509186  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 19/120
	I0229 02:23:23.511101  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 20/120
	I0229 02:23:24.512480  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 21/120
	I0229 02:23:25.513949  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 22/120
	I0229 02:23:26.515269  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 23/120
	I0229 02:23:27.516564  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 24/120
	I0229 02:23:28.518193  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 25/120
	I0229 02:23:29.519661  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 26/120
	I0229 02:23:30.521956  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 27/120
	I0229 02:23:31.523165  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 28/120
	I0229 02:23:32.524848  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 29/120
	I0229 02:23:33.526946  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 30/120
	I0229 02:23:34.528273  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 31/120
	I0229 02:23:35.529606  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 32/120
	I0229 02:23:36.531047  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 33/120
	I0229 02:23:37.532671  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 34/120
	I0229 02:23:38.534453  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 35/120
	I0229 02:23:39.536673  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 36/120
	I0229 02:23:40.538138  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 37/120
	I0229 02:23:41.539584  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 38/120
	I0229 02:23:42.540923  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 39/120
	I0229 02:23:43.542538  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 40/120
	I0229 02:23:44.544224  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 41/120
	I0229 02:23:45.545654  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 42/120
	I0229 02:23:46.547214  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 43/120
	I0229 02:23:47.548786  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 44/120
	I0229 02:23:48.550940  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 45/120
	I0229 02:23:49.552746  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 46/120
	I0229 02:23:50.554216  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 47/120
	I0229 02:23:51.555683  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 48/120
	I0229 02:23:52.556917  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 49/120
	I0229 02:23:53.558816  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 50/120
	I0229 02:23:54.560286  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 51/120
	I0229 02:23:55.561515  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 52/120
	I0229 02:23:56.562967  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 53/120
	I0229 02:23:57.564209  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 54/120
	I0229 02:23:58.566309  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 55/120
	I0229 02:23:59.567524  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 56/120
	I0229 02:24:00.568937  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 57/120
	I0229 02:24:01.570310  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 58/120
	I0229 02:24:02.571797  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 59/120
	I0229 02:24:03.574057  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 60/120
	I0229 02:24:04.575305  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 61/120
	I0229 02:24:05.576678  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 62/120
	I0229 02:24:06.578012  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 63/120
	I0229 02:24:07.579215  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 64/120
	I0229 02:24:08.581089  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 65/120
	I0229 02:24:09.582264  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 66/120
	I0229 02:24:10.583496  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 67/120
	I0229 02:24:11.584729  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 68/120
	I0229 02:24:12.585943  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 69/120
	I0229 02:24:13.587911  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 70/120
	I0229 02:24:14.589283  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 71/120
	I0229 02:24:15.590812  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 72/120
	I0229 02:24:16.592238  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 73/120
	I0229 02:24:17.593694  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 74/120
	I0229 02:24:18.595686  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 75/120
	I0229 02:24:19.597003  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 76/120
	I0229 02:24:20.598381  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 77/120
	I0229 02:24:21.599805  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 78/120
	I0229 02:24:22.601120  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 79/120
	I0229 02:24:23.603233  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 80/120
	I0229 02:24:24.605678  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 81/120
	I0229 02:24:25.607001  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 82/120
	I0229 02:24:26.608695  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 83/120
	I0229 02:24:27.610155  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 84/120
	I0229 02:24:28.611823  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 85/120
	I0229 02:24:29.613228  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 86/120
	I0229 02:24:30.614824  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 87/120
	I0229 02:24:31.616162  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 88/120
	I0229 02:24:32.617455  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 89/120
	I0229 02:24:33.619664  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 90/120
	I0229 02:24:34.621140  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 91/120
	I0229 02:24:35.622540  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 92/120
	I0229 02:24:36.624215  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 93/120
	I0229 02:24:37.625416  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 94/120
	I0229 02:24:38.627327  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 95/120
	I0229 02:24:39.628692  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 96/120
	I0229 02:24:40.630074  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 97/120
	I0229 02:24:41.631218  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 98/120
	I0229 02:24:42.632554  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 99/120
	I0229 02:24:43.634631  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 100/120
	I0229 02:24:44.636034  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 101/120
	I0229 02:24:45.637167  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 102/120
	I0229 02:24:46.638479  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 103/120
	I0229 02:24:47.640494  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 104/120
	I0229 02:24:48.642385  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 105/120
	I0229 02:24:49.643616  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 106/120
	I0229 02:24:50.644713  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 107/120
	I0229 02:24:51.646195  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 108/120
	I0229 02:24:52.647347  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 109/120
	I0229 02:24:53.649356  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 110/120
	I0229 02:24:54.650615  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 111/120
	I0229 02:24:55.651806  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 112/120
	I0229 02:24:56.653086  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 113/120
	I0229 02:24:57.654512  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 114/120
	I0229 02:24:58.656119  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 115/120
	I0229 02:24:59.657429  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 116/120
	I0229 02:25:00.658695  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 117/120
	I0229 02:25:01.660928  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 118/120
	I0229 02:25:02.662325  368648 main.go:141] libmachine: (embed-certs-915633) Waiting for machine to stop 119/120
	I0229 02:25:03.662803  368648 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0229 02:25:03.662877  368648 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0229 02:25:03.664740  368648 out.go:177] 
	W0229 02:25:03.665964  368648 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0229 02:25:03.665981  368648 out.go:239] * 
	W0229 02:25:03.669232  368648 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:25:03.670603  368648 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-915633 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-915633 -n embed-certs-915633
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-915633 -n embed-certs-915633: exit status 3 (18.586444931s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0229 02:25:22.258607  369254 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host
	E0229 02:25:22.258628  369254 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-915633" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.90s)
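For readers tracing the long run of "Waiting for machine to stop N/120" lines above: the stop path polls the machine state on a fixed schedule and gives up after a bounded number of attempts, which is what produces the GUEST_STOP_TIMEOUT / exit status 82 outcome recorded here. The following is a minimal, self-contained Go sketch of that bounded-wait pattern only; it is not minikube's or libmachine's actual code, and the vm type, its methods, and the attempt count passed in main are illustrative stand-ins.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vm is a hypothetical stand-in for a machine driver handle.
type vm struct{ state string }

// Stop requests a guest shutdown; a wedged guest may ignore it.
func (v *vm) Stop() {}

// State reports the machine's current state.
func (v *vm) State() string { return v.state }

// stopWithTimeout mirrors the bounded wait visible in the log:
// poll once per second, up to attempts times, then return an error
// if the machine is still running.
func stopWithTimeout(v *vm, attempts int) error {
	v.Stop()
	for i := 0; i < attempts; i++ {
		if v.State() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "` + v.State() + `"`)
}

func main() {
	// The log above shows 120 attempts (~2 minutes); 3 keeps this demo short.
	if err := stopWithTimeout(&vm{state: "Running"}, 3); err != nil {
		fmt.Println("stop err:", err) // corresponds to the exit-82 path above
	}
}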

TestStartStop/group/no-preload/serial/Stop (138.93s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-247751 --alsologtostderr -v=3
E0229 02:23:10.388327  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:10.393609  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:10.403897  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:10.424245  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:10.464591  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:10.544848  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:10.705263  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:11.025854  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:11.666660  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:12.947468  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:15.508414  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:20.629139  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:23:25.986324  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:23:30.870052  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-247751 --alsologtostderr -v=3: exit status 82 (2m0.269009898s)

-- stdout --
	* Stopping node "no-preload-247751"  ...
	
	

-- /stdout --
** stderr ** 
	I0229 02:23:07.487483  368713 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:23:07.487605  368713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:23:07.487615  368713 out.go:304] Setting ErrFile to fd 2...
	I0229 02:23:07.487619  368713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:23:07.487820  368713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:23:07.488155  368713 out.go:298] Setting JSON to false
	I0229 02:23:07.488256  368713 mustload.go:65] Loading cluster: no-preload-247751
	I0229 02:23:07.488630  368713 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:23:07.488717  368713 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/config.json ...
	I0229 02:23:07.488913  368713 mustload.go:65] Loading cluster: no-preload-247751
	I0229 02:23:07.489056  368713 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:23:07.489102  368713 stop.go:39] StopHost: no-preload-247751
	I0229 02:23:07.489671  368713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:23:07.489732  368713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:23:07.505435  368713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42247
	I0229 02:23:07.505939  368713 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:23:07.506613  368713 main.go:141] libmachine: Using API Version  1
	I0229 02:23:07.506644  368713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:23:07.507055  368713 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:23:07.509521  368713 out.go:177] * Stopping node "no-preload-247751"  ...
	I0229 02:23:07.510864  368713 main.go:141] libmachine: Stopping "no-preload-247751"...
	I0229 02:23:07.510900  368713 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:23:07.512651  368713 main.go:141] libmachine: (no-preload-247751) Calling .Stop
	I0229 02:23:07.516079  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 0/120
	I0229 02:23:08.517382  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 1/120
	I0229 02:23:09.519180  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 2/120
	I0229 02:23:10.520582  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 3/120
	I0229 02:23:11.522189  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 4/120
	I0229 02:23:12.524095  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 5/120
	I0229 02:23:13.525824  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 6/120
	I0229 02:23:14.527098  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 7/120
	I0229 02:23:15.528836  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 8/120
	I0229 02:23:16.530137  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 9/120
	I0229 02:23:17.532090  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 10/120
	I0229 02:23:18.533284  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 11/120
	I0229 02:23:19.534395  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 12/120
	I0229 02:23:20.536776  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 13/120
	I0229 02:23:21.538066  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 14/120
	I0229 02:23:22.539976  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 15/120
	I0229 02:23:23.541249  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 16/120
	I0229 02:23:24.542397  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 17/120
	I0229 02:23:25.543611  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 18/120
	I0229 02:23:26.545360  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 19/120
	I0229 02:23:27.547325  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 20/120
	I0229 02:23:28.548806  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 21/120
	I0229 02:23:29.550207  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 22/120
	I0229 02:23:30.551447  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 23/120
	I0229 02:23:31.552632  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 24/120
	I0229 02:23:32.553844  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 25/120
	I0229 02:23:33.554993  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 26/120
	I0229 02:23:34.556659  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 27/120
	I0229 02:23:35.557713  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 28/120
	I0229 02:23:36.559060  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 29/120
	I0229 02:23:37.560532  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 30/120
	I0229 02:23:38.561728  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 31/120
	I0229 02:23:39.563552  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 32/120
	I0229 02:23:40.564995  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 33/120
	I0229 02:23:41.566673  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 34/120
	I0229 02:23:42.568494  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 35/120
	I0229 02:23:43.569616  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 36/120
	I0229 02:23:44.570859  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 37/120
	I0229 02:23:45.572098  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 38/120
	I0229 02:23:46.573409  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 39/120
	I0229 02:23:47.575633  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 40/120
	I0229 02:23:48.576972  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 41/120
	I0229 02:23:49.578289  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 42/120
	I0229 02:23:50.579566  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 43/120
	I0229 02:23:51.581149  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 44/120
	I0229 02:23:52.582969  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 45/120
	I0229 02:23:53.584691  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 46/120
	I0229 02:23:54.587011  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 47/120
	I0229 02:23:55.588219  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 48/120
	I0229 02:23:56.589498  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 49/120
	I0229 02:23:57.591409  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 50/120
	I0229 02:23:58.592490  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 51/120
	I0229 02:23:59.593784  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 52/120
	I0229 02:24:00.594845  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 53/120
	I0229 02:24:01.596779  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 54/120
	I0229 02:24:02.598628  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 55/120
	I0229 02:24:03.599730  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 56/120
	I0229 02:24:04.600959  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 57/120
	I0229 02:24:05.602315  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 58/120
	I0229 02:24:06.603578  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 59/120
	I0229 02:24:07.605402  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 60/120
	I0229 02:24:08.606574  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 61/120
	I0229 02:24:09.607981  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 62/120
	I0229 02:24:10.609019  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 63/120
	I0229 02:24:11.610522  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 64/120
	I0229 02:24:12.612507  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 65/120
	I0229 02:24:13.613647  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 66/120
	I0229 02:24:14.614836  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 67/120
	I0229 02:24:15.617046  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 68/120
	I0229 02:24:16.618288  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 69/120
	I0229 02:24:17.620181  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 70/120
	I0229 02:24:18.621624  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 71/120
	I0229 02:24:19.623048  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 72/120
	I0229 02:24:20.624402  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 73/120
	I0229 02:24:21.625750  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 74/120
	I0229 02:24:22.627682  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 75/120
	I0229 02:24:23.628672  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 76/120
	I0229 02:24:24.629846  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 77/120
	I0229 02:24:25.631122  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 78/120
	I0229 02:24:26.632518  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 79/120
	I0229 02:24:27.634522  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 80/120
	I0229 02:24:28.635681  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 81/120
	I0229 02:24:29.636711  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 82/120
	I0229 02:24:30.638052  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 83/120
	I0229 02:24:31.639126  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 84/120
	I0229 02:24:32.640985  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 85/120
	I0229 02:24:33.642576  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 86/120
	I0229 02:24:34.644400  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 87/120
	I0229 02:24:35.645619  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 88/120
	I0229 02:24:36.646874  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 89/120
	I0229 02:24:37.648187  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 90/120
	I0229 02:24:38.649627  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 91/120
	I0229 02:24:39.650845  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 92/120
	I0229 02:24:40.651959  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 93/120
	I0229 02:24:41.652985  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 94/120
	I0229 02:24:42.654946  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 95/120
	I0229 02:24:43.656128  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 96/120
	I0229 02:24:44.657901  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 97/120
	I0229 02:24:45.659624  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 98/120
	I0229 02:24:46.660679  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 99/120
	I0229 02:24:47.661756  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 100/120
	I0229 02:24:48.663216  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 101/120
	I0229 02:24:49.664559  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 102/120
	I0229 02:24:50.665752  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 103/120
	I0229 02:24:51.667122  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 104/120
	I0229 02:24:52.668945  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 105/120
	I0229 02:24:53.670048  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 106/120
	I0229 02:24:54.671218  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 107/120
	I0229 02:24:55.672465  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 108/120
	I0229 02:24:56.673694  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 109/120
	I0229 02:24:57.675920  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 110/120
	I0229 02:24:58.677271  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 111/120
	I0229 02:24:59.678696  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 112/120
	I0229 02:25:00.680654  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 113/120
	I0229 02:25:01.681949  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 114/120
	I0229 02:25:02.683629  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 115/120
	I0229 02:25:03.684895  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 116/120
	I0229 02:25:04.686145  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 117/120
	I0229 02:25:05.687417  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 118/120
	I0229 02:25:06.688763  368713 main.go:141] libmachine: (no-preload-247751) Waiting for machine to stop 119/120
	I0229 02:25:07.690172  368713 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0229 02:25:07.690270  368713 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0229 02:25:07.692024  368713 out.go:177] 
	W0229 02:25:07.693236  368713 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0229 02:25:07.693252  368713 out.go:239] * 
	W0229 02:25:07.696073  368713 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:25:07.697344  368713 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-247751 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247751 -n no-preload-247751
E0229 02:25:18.784266  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:25:21.894954  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:21.900204  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:21.910513  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:21.930760  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:21.971058  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:22.051441  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:22.211909  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247751 -n no-preload-247751: exit status 3 (18.655641527s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0229 02:25:26.354634  369284 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host
	E0229 02:25:26.354656  369284 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-247751" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.93s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (138.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-071485 --alsologtostderr -v=3
E0229 02:24:09.040433  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 02:24:18.683965  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:18.689311  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:18.699559  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:18.719918  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:18.760240  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:18.840637  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:19.001050  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:19.321713  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:19.962650  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:21.243247  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:23.804148  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:28.924325  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:32.311349  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:24:37.821723  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:37.824802  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 02:24:37.826947  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:37.837191  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:37.857451  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:37.897750  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:37.978092  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:38.138514  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:38.459624  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:39.100639  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:39.164935  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:24:40.380798  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:42.941544  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:47.906901  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:24:48.062203  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:58.303329  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:24:59.645158  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-071485 --alsologtostderr -v=3: exit status 82 (2m0.280775777s)

-- stdout --
	* Stopping node "default-k8s-diff-port-071485"  ...
	
	

-- /stdout --
** stderr ** 
	I0229 02:23:54.275060  368937 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:23:54.275206  368937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:23:54.275221  368937 out.go:304] Setting ErrFile to fd 2...
	I0229 02:23:54.275228  368937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:23:54.275466  368937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:23:54.275784  368937 out.go:298] Setting JSON to false
	I0229 02:23:54.275878  368937 mustload.go:65] Loading cluster: default-k8s-diff-port-071485
	I0229 02:23:54.276240  368937 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:23:54.276314  368937 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/config.json ...
	I0229 02:23:54.276479  368937 mustload.go:65] Loading cluster: default-k8s-diff-port-071485
	I0229 02:23:54.276578  368937 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:23:54.276602  368937 stop.go:39] StopHost: default-k8s-diff-port-071485
	I0229 02:23:54.277062  368937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:23:54.277115  368937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:23:54.291925  368937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33231
	I0229 02:23:54.292364  368937 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:23:54.292931  368937 main.go:141] libmachine: Using API Version  1
	I0229 02:23:54.292957  368937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:23:54.293292  368937 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:23:54.295546  368937 out.go:177] * Stopping node "default-k8s-diff-port-071485"  ...
	I0229 02:23:54.297264  368937 main.go:141] libmachine: Stopping "default-k8s-diff-port-071485"...
	I0229 02:23:54.297294  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:23:54.298785  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Stop
	I0229 02:23:54.302099  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 0/120
	I0229 02:23:55.303405  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 1/120
	I0229 02:23:56.304774  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 2/120
	I0229 02:23:57.306816  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 3/120
	I0229 02:23:58.308168  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 4/120
	I0229 02:23:59.310259  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 5/120
	I0229 02:24:00.311749  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 6/120
	I0229 02:24:01.313105  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 7/120
	I0229 02:24:02.314533  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 8/120
	I0229 02:24:03.315866  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 9/120
	I0229 02:24:04.317935  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 10/120
	I0229 02:24:05.319396  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 11/120
	I0229 02:24:06.320703  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 12/120
	I0229 02:24:07.321978  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 13/120
	I0229 02:24:08.323201  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 14/120
	I0229 02:24:09.325140  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 15/120
	I0229 02:24:10.326536  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 16/120
	I0229 02:24:11.328685  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 17/120
	I0229 02:24:12.329940  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 18/120
	I0229 02:24:13.331328  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 19/120
	I0229 02:24:14.333442  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 20/120
	I0229 02:24:15.334977  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 21/120
	I0229 02:24:16.336322  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 22/120
	I0229 02:24:17.337857  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 23/120
	I0229 02:24:18.339192  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 24/120
	I0229 02:24:19.340995  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 25/120
	I0229 02:24:20.342506  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 26/120
	I0229 02:24:21.343907  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 27/120
	I0229 02:24:22.345342  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 28/120
	I0229 02:24:23.346748  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 29/120
	I0229 02:24:24.348937  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 30/120
	I0229 02:24:25.350325  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 31/120
	I0229 02:24:26.351810  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 32/120
	I0229 02:24:27.353482  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 33/120
	I0229 02:24:28.355142  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 34/120
	I0229 02:24:29.357364  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 35/120
	I0229 02:24:30.358621  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 36/120
	I0229 02:24:31.360397  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 37/120
	I0229 02:24:32.361750  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 38/120
	I0229 02:24:33.363057  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 39/120
	I0229 02:24:34.365351  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 40/120
	I0229 02:24:35.366577  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 41/120
	I0229 02:24:36.367775  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 42/120
	I0229 02:24:37.369126  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 43/120
	I0229 02:24:38.370264  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 44/120
	I0229 02:24:39.371982  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 45/120
	I0229 02:24:40.373564  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 46/120
	I0229 02:24:41.374888  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 47/120
	I0229 02:24:42.376651  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 48/120
	I0229 02:24:43.377830  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 49/120
	I0229 02:24:44.379948  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 50/120
	I0229 02:24:45.381147  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 51/120
	I0229 02:24:46.383210  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 52/120
	I0229 02:24:47.384475  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 53/120
	I0229 02:24:48.385940  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 54/120
	I0229 02:24:49.387958  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 55/120
	I0229 02:24:50.389671  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 56/120
	I0229 02:24:51.390897  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 57/120
	I0229 02:24:52.392735  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 58/120
	I0229 02:24:53.394033  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 59/120
	I0229 02:24:54.396046  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 60/120
	I0229 02:24:55.397473  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 61/120
	I0229 02:24:56.399145  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 62/120
	I0229 02:24:57.400503  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 63/120
	I0229 02:24:58.401873  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 64/120
	I0229 02:24:59.403674  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 65/120
	I0229 02:25:00.405360  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 66/120
	I0229 02:25:01.406670  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 67/120
	I0229 02:25:02.408036  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 68/120
	I0229 02:25:03.409567  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 69/120
	I0229 02:25:04.411689  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 70/120
	I0229 02:25:05.413000  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 71/120
	I0229 02:25:06.414598  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 72/120
	I0229 02:25:07.415998  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 73/120
	I0229 02:25:08.417652  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 74/120
	I0229 02:25:09.419560  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 75/120
	I0229 02:25:10.421081  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 76/120
	I0229 02:25:11.422570  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 77/120
	I0229 02:25:12.424701  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 78/120
	I0229 02:25:13.426510  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 79/120
	I0229 02:25:14.428667  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 80/120
	I0229 02:25:15.431002  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 81/120
	I0229 02:25:16.432372  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 82/120
	I0229 02:25:17.433769  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 83/120
	I0229 02:25:18.435392  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 84/120
	I0229 02:25:19.437439  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 85/120
	I0229 02:25:20.438760  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 86/120
	I0229 02:25:21.440181  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 87/120
	I0229 02:25:22.441604  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 88/120
	I0229 02:25:23.442826  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 89/120
	I0229 02:25:24.444872  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 90/120
	I0229 02:25:25.446241  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 91/120
	I0229 02:25:26.447342  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 92/120
	I0229 02:25:27.448614  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 93/120
	I0229 02:25:28.450108  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 94/120
	I0229 02:25:29.452023  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 95/120
	I0229 02:25:30.453430  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 96/120
	I0229 02:25:31.454941  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 97/120
	I0229 02:25:32.456247  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 98/120
	I0229 02:25:33.457521  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 99/120
	I0229 02:25:34.459729  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 100/120
	I0229 02:25:35.461111  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 101/120
	I0229 02:25:36.462415  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 102/120
	I0229 02:25:37.464653  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 103/120
	I0229 02:25:38.465995  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 104/120
	I0229 02:25:39.468145  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 105/120
	I0229 02:25:40.469676  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 106/120
	I0229 02:25:41.471070  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 107/120
	I0229 02:25:42.472741  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 108/120
	I0229 02:25:43.474010  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 109/120
	I0229 02:25:44.476300  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 110/120
	I0229 02:25:45.477847  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 111/120
	I0229 02:25:46.479270  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 112/120
	I0229 02:25:47.480762  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 113/120
	I0229 02:25:48.482176  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 114/120
	I0229 02:25:49.484326  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 115/120
	I0229 02:25:50.485728  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 116/120
	I0229 02:25:51.487295  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 117/120
	I0229 02:25:52.488637  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 118/120
	I0229 02:25:53.489901  368937 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for machine to stop 119/120
	I0229 02:25:54.490958  368937 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0229 02:25:54.491052  368937 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0229 02:25:54.493045  368937 out.go:177] 
	W0229 02:25:54.494511  368937 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0229 02:25:54.494533  368937 out.go:239] * 
	W0229 02:25:54.497571  368937 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:25:54.498822  368937 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-071485 --alsologtostderr -v=3" : exit status 82
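
The stop failure above is a bounded poll: the driver asks the VM for its state roughly once per second and gives up after 120 attempts, which is where the "Waiting for machine to stop N/120" lines and the final GUEST_STOP_TIMEOUT come from. Below is a minimal, self-contained Go sketch of that pattern only; the helper name, the state callback, and the 120 x 1s budget are inferred from the log, not taken from minikube's source.

    package main

    import (
    	"fmt"
    	"time"
    )

    // waitForStop polls state() once per interval until it reports "Stopped"
    // or the attempt budget is exhausted, mirroring the
    // "Waiting for machine to stop N/120" lines captured above.
    func waitForStop(state func() (string, error), attempts int, interval time.Duration) error {
    	last := "unknown"
    	for i := 0; i < attempts; i++ {
    		s, err := state()
    		if err != nil {
    			return err
    		}
    		if s == "Stopped" {
    			return nil
    		}
    		last = s
    		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("unable to stop vm, current state %q", last)
    }

    func main() {
    	// A VM that never leaves "Running", as in the run above. The budget
    	// is shrunk for demonstration; the real wait was 120 x 1s (~2 min).
    	err := waitForStop(func() (string, error) { return "Running", nil }, 5, 10*time.Millisecond)
    	fmt.Println("stop err:", err)
    }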
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485
E0229 02:25:59.745430  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:26:02.856960  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:26:08.821773  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485: exit status 3 (18.445861127s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:26:12.946589  369659 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.233:22: connect: no route to host
	E0229 02:26:12.946618  369659 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.233:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-071485" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.73s)
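
Each post-mortem above shells out to status with --format={{.Host}}, a Go text/template rendered against the status result, which is why the captured stdout is just the bare word "Error". A small sketch of that rendering follows; the Status struct here is illustrative, not minikube's actual type.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status stands in for whatever struct a status command renders;
    // --format={{.Host}} selects only the Host field.
    type Status struct {
    	Host    string
    	Kubelet string
    }

    func main() {
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
    	// With the VM unreachable over SSH, the host field carries "Error",
    	// matching the bare "Error" stdout in the post-mortems above.
    	_ = tmpl.Execute(os.Stdout, Status{Host: "Error", Kubelet: "Stopped"})
    }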

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-275488 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-275488 create -f testdata/busybox.yaml: exit status 1 (46.3976ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-275488" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-275488 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 6 (247.548691ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:25:01.378556  369151 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-275488" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 6 (233.193639ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:25:01.612417  369183 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-275488" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)
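
The DeployApp failure never reaches the cluster: kubectl is pointed at a context name that is absent from the kubeconfig (see the "does not appear in .../kubeconfig" stderr above), so every command fails locally. A hedged sketch of the kind of preflight a harness could run with client-go's clientcmd package; the check itself is illustrative, not what the test does.

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	path := os.Getenv("KUBECONFIG")
    	if path == "" {
    		path = clientcmd.RecommendedHomeFile // ~/.kube/config
    	}
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
    		os.Exit(1)
    	}
    	name := "old-k8s-version-275488"
    	if _, ok := cfg.Contexts[name]; !ok {
    		// The state this test hit: the profile is running but its
    		// context was never written back to the kubeconfig.
    		fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
    		os.Exit(1)
    	}
    	fmt.Println("context", name, "present; kubectl --context will resolve")
    }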

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-275488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-275488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m32.702078731s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-275488 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-275488 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-275488 describe deploy/metrics-server -n kube-system: exit status 1 (48.455798ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-275488" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on the metrics-server deployment. args "kubectl --context old-k8s-version-275488 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 6 (248.277474ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:26:34.611999  369936 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-275488" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (93.00s)
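
Note that the addon enable itself did not break: every kubectl apply in the callback was refused at 127.0.0.1:8443, meaning no apiserver was listening when the manifests were pushed. A one-connection reachability probe reports the same condition up front; this stdlib sketch is illustrative, with the address taken from the log and a nominal timeout.

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// The endpoint every metrics-server manifest apply was refused on.
    	addr := "127.0.0.1:8443"
    	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    	if err != nil {
    		// With no apiserver listening, this prints the same
    		// "connect: connection refused" the applies hit.
    		fmt.Fprintf(os.Stderr, "apiserver unreachable at %s: %v\n", addr, err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("apiserver reachable; addon manifests can be applied")
    }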

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-915633 -n embed-certs-915633
E0229 02:25:22.532962  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:23.173866  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:24.454191  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-915633 -n embed-certs-915633: exit status 3 (3.16871713s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:25:25.426648  369348 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host
	E0229 02:25:25.426679  369348 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-915633 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-915633 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153522747s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-915633 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-915633 -n embed-certs-915633
E0229 02:25:32.136000  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:32.978828  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-915633 -n embed-certs-915633: exit status 3 (3.06198899s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:25:34.642637  369467 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host
	E0229 02:25:34.642662  369467 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.218:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-915633" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
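
The assertion at start_stop_delete_test.go:241 is the crux of all three EnableAddonAfterStop failures: after a stop, status must print exactly "Stopped", but because the earlier stop timed out, the SSH dial gets "no route to host" and the status degrades to "Error". A minimal Go test sketch of that assertion pattern (names are illustrative; run with go test).

    package main

    import "testing"

    // assertHostStatus mirrors the expectation at start_stop_delete_test.go:241:
    // after a stop, `status --format={{.Host}}` must print "Stopped".
    func assertHostStatus(t *testing.T, got string) {
    	t.Helper()
    	const want = "Stopped"
    	if got != want {
    		t.Errorf("expected post-stop host status to be %q but got %q", want, got)
    	}
    }

    func TestPostStopStatus(t *testing.T) {
    	// The runs above produced "Error" (SSH: no route to host), which is
    	// exactly the mismatch this assertion reports; with "Stopped" it passes.
    	assertHostStatus(t, "Error")
    }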

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247751 -n no-preload-247751
E0229 02:25:27.015397  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:27.858701  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:25:27.863977  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:25:27.874278  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:25:27.894537  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:25:27.934837  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:25:28.015200  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:25:28.175592  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:25:28.496433  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:25:29.136679  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247751 -n no-preload-247751: exit status 3 (3.200161333s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:25:29.554553  369408 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host
	E0229 02:25:29.554574  369408 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-247751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0229 02:25:30.417738  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-247751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151971762s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-247751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247751 -n no-preload-247751
E0229 02:25:38.099860  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247751 -n no-preload-247751: exit status 3 (3.063459699s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:25:38.770618  369550 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host
	E0229 02:25:38.770637  369550 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.114:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-247751" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485: exit status 3 (3.168543568s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:26:16.114648  369757 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.233:22: connect: no route to host
	E0229 02:26:16.114678  369757 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.233:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-071485 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-071485 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152038451s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.233:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-071485 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485: exit status 3 (3.063341638s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 02:26:25.330717  369828 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.233:22: connect: no route to host
	E0229 02:26:25.330768  369828 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.233:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-071485" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
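
Every verdict in these tests is keyed off a process exit code (82 for the stop timeout, 3 and 6 from status, 10 and 11 from addon enable). Pulling that code out of a wrapped command in Go is a small os/exec pattern; a minimal sketch follows, with an arbitrary command chosen only to produce a nonzero exit.

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// `false` exits 1; the harness does the same dance around the
    	// out/minikube-linux-amd64 invocations quoted above.
    	cmd := exec.Command("false")
    	err := cmd.Run()

    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// This is the "exit status N" the report keeps quoting.
    		fmt.Println("exit status", exitErr.ExitCode())
    		return
    	}
    	if err != nil {
    		fmt.Println("command did not start:", err)
    		return
    	}
    	fmt.Println("exit status 0")
    }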

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (781.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-275488 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0229 02:26:36.514378  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:36.519679  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:36.529977  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:36.550248  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:36.590562  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:36.670923  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:36.831389  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:37.152000  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:37.792930  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:39.073391  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:41.634358  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:43.818143  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:26:46.755018  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:26:49.782176  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:26:56.995252  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:27:02.527166  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:27:04.062023  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:27:17.476270  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:27:21.666106  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:27:31.747980  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:27:58.437345  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:28:05.739963  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:28:10.387989  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:28:11.703441  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:28:38.072588  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:29:09.040103  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 02:29:18.684136  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:29:20.358076  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:29:37.822563  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:29:37.824742  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 02:29:46.368412  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:30:05.507217  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:30:21.895217  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:30:27.859146  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:30:49.580736  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:30:55.543839  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:31:00.881303  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 02:31:36.515543  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:32:04.061857  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:32:04.199156  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:33:10.388339  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:34:09.039825  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 02:34:18.684026  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:34:37.821816  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:34:37.825054  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 02:35:21.895213  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:35:27.858831  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:35:32.089090  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-275488 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: exit status 109 (12m58.268132296s)

                                                
                                                
-- stdout --
	* [old-k8s-version-275488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node old-k8s-version-275488 in cluster old-k8s-version-275488
	* Restarting existing kvm2 VM for "old-k8s-version-275488" ...
	* Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 02:26:36.132854  370051 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:26:36.133389  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133407  370051 out.go:304] Setting ErrFile to fd 2...
	I0229 02:26:36.133414  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133912  370051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:26:36.134959  370051 out.go:298] Setting JSON to false
	I0229 02:26:36.135907  370051 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7739,"bootTime":1709165857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:26:36.135982  370051 start.go:139] virtualization: kvm guest
	I0229 02:26:36.137916  370051 out.go:177] * [old-k8s-version-275488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:26:36.139510  370051 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:26:36.139543  370051 notify.go:220] Checking for updates...
	I0229 02:26:36.141206  370051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:26:36.142776  370051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:26:36.143982  370051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:26:36.145097  370051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:26:36.146170  370051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:26:36.147751  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:26:36.148198  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.148298  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.163969  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0229 02:26:36.164373  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.164977  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.165003  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.165394  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.165584  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.167312  370051 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 02:26:36.168337  370051 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:26:36.168641  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.168683  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.184089  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0229 02:26:36.184605  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.185181  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.185210  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.185551  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.185723  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.222261  370051 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:26:36.223363  370051 start.go:299] selected driver: kvm2
	I0229 02:26:36.223374  370051 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.223487  370051 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:26:36.224130  370051 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.224195  370051 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:26:36.239302  370051 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:26:36.239664  370051 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:26:36.239741  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:26:36.239755  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:26:36.239765  370051 start_flags.go:323] config:
	{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.239908  370051 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.241466  370051 out.go:177] * Starting control plane node old-k8s-version-275488 in cluster old-k8s-version-275488
	I0229 02:26:36.242536  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:26:36.242564  370051 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 02:26:36.242573  370051 cache.go:56] Caching tarball of preloaded images
	I0229 02:26:36.242641  370051 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:26:36.242651  370051 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 02:26:36.242742  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:26:36.242905  370051 start.go:365] acquiring machines lock for old-k8s-version-275488: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:30:51.887858  370051 start.go:369] acquired machines lock for "old-k8s-version-275488" in 4m15.644916266s
	I0229 02:30:51.887935  370051 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:51.887944  370051 fix.go:54] fixHost starting: 
	I0229 02:30:51.888374  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:51.888428  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:51.905851  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0229 02:30:51.906292  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:51.906778  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:30:51.906806  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:51.907250  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:51.907459  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:30:51.907631  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetState
	I0229 02:30:51.909061  370051 fix.go:102] recreateIfNeeded on old-k8s-version-275488: state=Stopped err=<nil>
	I0229 02:30:51.909093  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	W0229 02:30:51.909251  370051 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:51.911318  370051 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275488" ...
	I0229 02:30:51.912520  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .Start
	I0229 02:30:51.912688  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring networks are active...
	I0229 02:30:51.913511  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network default is active
	I0229 02:30:51.913929  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network mk-old-k8s-version-275488 is active
	I0229 02:30:51.914378  370051 main.go:141] libmachine: (old-k8s-version-275488) Getting domain xml...
	I0229 02:30:51.915191  370051 main.go:141] libmachine: (old-k8s-version-275488) Creating domain...
	I0229 02:30:53.179261  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting to get IP...
	I0229 02:30:53.180359  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.180800  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.180922  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.180789  370858 retry.go:31] will retry after 282.360524ms: waiting for machine to come up
	I0229 02:30:53.465135  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.465708  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.465742  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.465651  370858 retry.go:31] will retry after 341.876004ms: waiting for machine to come up
	I0229 02:30:53.809322  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.809734  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.809876  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.809797  370858 retry.go:31] will retry after 356.208548ms: waiting for machine to come up
	I0229 02:30:54.167329  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.167824  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.167852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.167760  370858 retry.go:31] will retry after 395.76503ms: waiting for machine to come up
	I0229 02:30:54.565496  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.565976  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.566004  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.565933  370858 retry.go:31] will retry after 617.898012ms: waiting for machine to come up
	I0229 02:30:55.185679  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:55.186193  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:55.186237  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:55.186144  370858 retry.go:31] will retry after 911.947678ms: waiting for machine to come up
	I0229 02:30:56.099334  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:56.099788  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:56.099815  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:56.099726  370858 retry.go:31] will retry after 1.132066509s: waiting for machine to come up
	I0229 02:30:57.233610  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:57.234113  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:57.234145  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:57.234063  370858 retry.go:31] will retry after 1.238348525s: waiting for machine to come up
	I0229 02:30:58.474146  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:58.474696  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:58.474733  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:58.474642  370858 retry.go:31] will retry after 1.373712981s: waiting for machine to come up
	I0229 02:30:59.850075  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:59.850504  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:59.850526  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:59.850460  370858 retry.go:31] will retry after 2.156069813s: waiting for machine to come up
	I0229 02:31:02.007911  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:02.008381  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:02.008409  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:02.008330  370858 retry.go:31] will retry after 1.864134048s: waiting for machine to come up
	I0229 02:31:03.873997  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:03.874606  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:03.874653  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:03.874547  370858 retry.go:31] will retry after 2.45659808s: waiting for machine to come up
	I0229 02:31:06.333259  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:06.333776  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:06.333811  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:06.333733  370858 retry.go:31] will retry after 3.223893936s: waiting for machine to come up
	I0229 02:31:09.559349  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:09.559937  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:09.559968  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:09.559891  370858 retry.go:31] will retry after 5.278822831s: waiting for machine to come up
	I0229 02:31:14.842498  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843049  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has current primary IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843083  370051 main.go:141] libmachine: (old-k8s-version-275488) Found IP for machine: 192.168.39.160
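The wait loop above polls for the guest's DHCP lease, sleeping a little longer (with jitter) after each miss until an address appears. A minimal sketch of that retry-with-backoff pattern, assuming a hypothetical lookupIP in place of the real libvirt lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real query that scans the libvirt network's
// DHCP leases for the domain's MAC address (52:54:00:6c:fc:74 above).
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := 250 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, mirroring the irregular
		// "will retry after ..." intervals in the log.
		d := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("retry %d: will retry after %v\n", attempt, d)
		time.Sleep(d)
		backoff = backoff * 3 / 2
	}
	fmt.Println("timed out waiting for machine to come up")
}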
	I0229 02:31:14.843112  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserving static IP address...
	I0229 02:31:14.843485  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.843510  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | skip adding static IP to network mk-old-k8s-version-275488 - found existing host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"}
	I0229 02:31:14.843525  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserved static IP address: 192.168.39.160
	I0229 02:31:14.843535  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Getting to WaitForSSH function...
	I0229 02:31:14.843553  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting for SSH to be available...
	I0229 02:31:14.845739  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846087  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.846120  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846289  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH client type: external
	I0229 02:31:14.846319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa (-rw-------)
	I0229 02:31:14.846355  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:14.846372  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | About to run SSH command:
	I0229 02:31:14.846390  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | exit 0
	I0229 02:31:14.979384  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | SSH cmd err, output: <nil>: 
	I0229 02:31:14.979896  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetConfigRaw
	I0229 02:31:14.980716  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:14.983852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984278  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.984319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984639  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:31:14.984865  370051 machine.go:88] provisioning docker machine ...
	I0229 02:31:14.984890  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:14.985140  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985324  370051 buildroot.go:166] provisioning hostname "old-k8s-version-275488"
	I0229 02:31:14.985347  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985494  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:14.988036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988438  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.988464  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988633  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:14.988829  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989003  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989174  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:14.989361  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:14.989604  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:14.989621  370051 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275488 && echo "old-k8s-version-275488" | sudo tee /etc/hostname
	I0229 02:31:15.125564  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275488
	
	I0229 02:31:15.125605  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.128963  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129570  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.129652  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129735  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.129996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130185  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130380  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.130616  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.130872  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.130900  370051 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275488/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:15.272298  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
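The two SSH commands above make up the hostname provisioning step: set /etc/hostname, then apply an idempotent edit to /etc/hosts. A minimal standalone sketch of the same /etc/hosts edit, run locally through sh for illustration (the hostname is taken from the log; running it requires sudo):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	host := "old-k8s-version-275488"
	// Only touch /etc/hosts when no line already ends with the hostname;
	// rewrite an existing 127.0.1.1 entry, otherwise append one.
	script := fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
			fi
		fi`, host)
	out, err := exec.Command("sh", "-c", script).CombinedOutput()
	fmt.Printf("output: %s (err=%v)\n", out, err)
}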
	I0229 02:31:15.272337  370051 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:15.272368  370051 buildroot.go:174] setting up certificates
	I0229 02:31:15.272385  370051 provision.go:83] configureAuth start
	I0229 02:31:15.272402  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:15.272772  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:15.276382  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.276838  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.276869  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.277051  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.279927  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280298  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.280326  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280505  370051 provision.go:138] copyHostCerts
	I0229 02:31:15.280555  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:15.280566  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:15.280619  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:15.280749  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:15.280764  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:15.280789  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:15.280857  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:15.280871  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:15.280891  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:15.280954  370051 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275488 san=[192.168.39.160 192.168.39.160 localhost 127.0.0.1 minikube old-k8s-version-275488]
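The server cert generated above carries both IP and DNS SANs, so one certificate is valid whether the machine is reached as 192.168.39.160, localhost, minikube, or its hostname. A self-signed sketch with the same SANs (the real flow signs with the minikube CA key pair rather than self-signing):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-275488"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-275488"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.160"), net.ParseIP("127.0.0.1")},
	}
	// Template doubles as parent, i.e. self-signed; errors elided for brevity.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}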
	I0229 02:31:15.360428  370051 provision.go:172] copyRemoteCerts
	I0229 02:31:15.360487  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:15.360512  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.363540  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.363931  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.363966  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.364154  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.364337  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.364495  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.364622  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.453643  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:15.483233  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 02:31:15.512164  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:15.543453  370051 provision.go:86] duration metric: configureAuth took 271.048547ms
	I0229 02:31:15.543484  370051 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:15.543705  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:31:15.543816  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.546472  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.546807  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.546835  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.547049  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.547272  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547455  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547662  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.547861  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.548035  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.548052  370051 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:15.835533  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:15.835572  370051 machine.go:91] provisioned docker machine in 850.691497ms
	I0229 02:31:15.835589  370051 start.go:300] post-start starting for "old-k8s-version-275488" (driver="kvm2")
	I0229 02:31:15.835604  370051 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:15.835635  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:15.835995  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:15.836025  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.838946  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839297  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.839330  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839460  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.839665  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.839839  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.840008  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.925849  370051 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:15.931227  370051 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:15.931260  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:15.931363  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:15.931465  370051 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:15.931574  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:15.942500  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:15.972803  370051 start.go:303] post-start completed in 137.19736ms
	I0229 02:31:15.972838  370051 fix.go:56] fixHost completed within 24.084893996s
	I0229 02:31:15.972873  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.975698  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976063  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.976093  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976279  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.976518  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976659  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976795  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.976959  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.977119  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.977130  370051 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:16.095892  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173876.041987567
	
	I0229 02:31:16.095917  370051 fix.go:206] guest clock: 1709173876.041987567
	I0229 02:31:16.095927  370051 fix.go:219] Guest: 2024-02-29 02:31:16.041987567 +0000 UTC Remote: 2024-02-29 02:31:15.972843681 +0000 UTC m=+279.886639354 (delta=69.143886ms)
	I0229 02:31:16.095954  370051 fix.go:190] guest clock delta is within tolerance: 69.143886ms
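The fix step above reads `date +%s.%N` on the guest and compares it against the host-side timestamp; the restart proceeds because the 69ms delta is within tolerance. A minimal sketch of the parse-and-compare step, using the sample values from the log (the one-minute bound is an assumption for the sketch):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1709173876.041987567" // guest `date +%s.%N` output from the log
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec).UTC()

	// Host-side "Remote" timestamp from the log line above.
	host := time.Date(2024, 2, 29, 2, 31, 15, 972843681, time.UTC)
	delta := guest.Sub(host)
	// Assumed tolerance of one minute, for illustration only.
	fmt.Printf("guest=%v delta=%v within tolerance=%v\n", guest, delta, delta.Abs() < time.Minute)
}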
	I0229 02:31:16.095962  370051 start.go:83] releasing machines lock for "old-k8s-version-275488", held for 24.208056775s
	I0229 02:31:16.095996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.096336  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:16.099518  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100016  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.100060  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100189  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100751  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100955  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.101035  370051 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:16.101084  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.101167  370051 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:16.101190  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.104588  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.104638  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105000  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105059  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105101  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105335  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105546  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105590  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105821  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105832  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106002  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:16.106028  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106180  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:16.194120  370051 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:16.220808  370051 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:16.386082  370051 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:16.393419  370051 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:16.393512  370051 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:16.418966  370051 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:16.419003  370051 start.go:475] detecting cgroup driver to use...
	I0229 02:31:16.419087  370051 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:16.444372  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:16.466354  370051 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:16.466430  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:16.488710  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:16.509561  370051 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:16.651716  370051 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:16.840453  370051 docker.go:233] disabling docker service ...
	I0229 02:31:16.840538  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:16.869611  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:16.890123  370051 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:17.047701  370051 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:17.225457  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:17.248553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:17.275486  370051 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 02:31:17.275572  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.290350  370051 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:17.290437  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.304093  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.320562  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
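Taken together, the sed edits above leave the drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly these values (a sketch of only the affected keys; the rest of the file is untouched):

	pause_image = "registry.k8s.io/pause:3.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"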
	I0229 02:31:17.339790  370051 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:17.356570  370051 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:17.371208  370051 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:17.371303  370051 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:17.390748  370051 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:31:17.405750  370051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:17.555023  370051 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:17.754419  370051 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:17.754508  370051 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:17.760190  370051 start.go:543] Will wait 60s for crictl version
	I0229 02:31:17.760245  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:17.765195  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:17.815839  370051 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:17.815953  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.857470  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.896796  370051 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 02:31:17.898162  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:17.901332  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.901809  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:17.901840  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.902046  370051 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:17.907256  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:17.924135  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:31:17.924218  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:17.986923  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:17.986992  370051 ssh_runner.go:195] Run: which lz4
	I0229 02:31:17.992110  370051 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:17.997252  370051 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:17.997287  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 02:31:20.124958  370051 crio.go:444] Took 2.132885 seconds to copy over tarball
	I0229 02:31:20.125075  370051 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:31:23.625489  370051 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.500380961s)
	I0229 02:31:23.625526  370051 crio.go:451] Took 3.500531 seconds to extract the tarball
	I0229 02:31:23.625536  370051 ssh_runner.go:146] rm: /preloaded.tar.lz4
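The preload path above copies the tarball into the guest and unpacks it with tar: -I lz4 streams the archive through the lz4 decompressor, -C /var drops the container image store in place, and --xattrs with --xattrs-include security.capability preserves file capabilities on the extracted binaries. The tarball is then removed to reclaim the ~441MB it occupied.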
	I0229 02:31:23.671458  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:23.714048  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:23.714087  370051 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:31:23.714189  370051 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.714213  370051 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.714309  370051 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:31:23.714424  370051 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.714269  370051 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.714461  370051 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.714519  370051 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.714192  370051 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.716086  370051 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.716076  370051 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716088  370051 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.716143  370051 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.716081  370051 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.716275  370051 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:31:23.838722  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.844569  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 02:31:23.853089  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.857738  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.864060  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.865519  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.926256  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.997349  370051 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:31:23.997401  370051 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.997463  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.010625  370051 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:31:24.010674  370051 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:31:24.010722  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083140  370051 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:31:24.083203  370051 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:31:24.083232  370051 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:31:24.083247  370051 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.083266  370051 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.083269  370051 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.083308  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083319  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083364  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083166  370051 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:31:24.083426  370051 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.083471  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123878  370051 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:31:24.123928  370051 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.123972  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123982  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:24.123973  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:31:24.124043  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.124051  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.124097  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.124153  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.152226  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.270585  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:31:24.305436  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:31:24.305532  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:31:24.305621  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:31:24.305629  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:31:24.305799  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:31:24.316950  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:31:24.635837  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:24.791670  370051 cache_images.go:92] LoadImages completed in 1.077558745s
	W0229 02:31:24.791798  370051 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
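The warning above is the tail end of the image-cache pass: for each required image, minikube asks the runtime for its stored ID (`sudo podman image inspect --format {{.Id}}`), marks mismatches as "needs transfer", removes them with crictl, and reloads them from the on-disk cache; here the reload fails because the cached tarball for kube-controller-manager_v1.16.0 was never written. A minimal Go sketch of that ID comparison, not minikube's actual implementation; the expected hash below is copied from the pause:3.1 line above purely for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runtimeImageID asks podman for the stored ID of an image; an error or
    // empty output means the image is absent from the container runtime.
    func runtimeImageID(image string) (string, error) {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // needsTransfer reports whether the image must be (re)loaded: it is
    // missing, or its stored ID differs from the expected hash.
    func needsTransfer(image, wantID string) bool {
    	got, err := runtimeImageID(image)
    	return err != nil || got != wantID
    }

    func main() {
    	img := "registry.k8s.io/pause:3.1"
    	want := "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"
    	fmt.Println(img, "needs transfer:", needsTransfer(img, want))
    }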
	I0229 02:31:24.791902  370051 ssh_runner.go:195] Run: crio config
	I0229 02:31:24.851132  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:31:24.851164  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:24.851189  370051 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:24.851213  370051 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275488 NodeName:old-k8s-version-275488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:31:24.851423  370051 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-275488
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.160:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:31:24.851524  370051 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275488 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
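The kubeadm YAML and the kubelet systemd drop-in above are both rendered from the options struct logged at kubeadm.go:176. A minimal Go sketch of that style of template rendering; the template text and `opts` struct here are illustrative stand-ins, not minikube's actual types:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A fragment of an InitConfiguration template, filled from typed options.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `

    type opts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	// Values taken from the kubeadm options logged above.
    	t.Execute(os.Stdout, opts{AdvertiseAddress: "192.168.39.160", APIServerPort: 8443})
    }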
	I0229 02:31:24.851598  370051 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:31:24.864237  370051 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:24.864330  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:24.879552  370051 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 02:31:24.901027  370051 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:24.920638  370051 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 02:31:24.941894  370051 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:24.947439  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
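The one-liner above makes the control-plane.minikube.internal mapping idempotent: strip any existing line for the host, append the canonical one, and copy the result back over /etc/hosts. The same transformation as a minimal Go sketch operating on a local file (the filename in main is a stand-in; the real command edits the guest's /etc/hosts over SSH with sudo):

    package main

    import (
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any stale line for host, then appends "ip\thost",
    // mirroring the grep -v / echo / cp pipeline in the log line above.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	_ = ensureHostsEntry("hosts.test", "192.168.39.160", "control-plane.minikube.internal")
    }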
	I0229 02:31:24.962396  370051 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488 for IP: 192.168.39.160
	I0229 02:31:24.962435  370051 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:24.962621  370051 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:24.962673  370051 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:24.962781  370051 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.key
	I0229 02:31:24.962851  370051 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619
	I0229 02:31:24.962919  370051 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key
	I0229 02:31:24.963087  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:24.963126  370051 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:24.963138  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:24.963185  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:24.963213  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:24.963245  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:24.963296  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:24.963980  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:24.996049  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:25.030503  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:25.057695  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:25.091982  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:25.126636  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:25.156613  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:25.186480  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:25.221012  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:25.254122  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:25.282646  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:25.312624  370051 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:25.335020  370051 ssh_runner.go:195] Run: openssl version
	I0229 02:31:25.342920  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:25.355808  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361349  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361433  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.368335  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:25.380799  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:25.393069  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398466  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398539  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.404776  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:25.416735  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:25.428884  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434503  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434584  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.441187  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
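Each CA installed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under the name OpenSSL's directory lookup expects: the certificate's subject hash plus a `.0` suffix (b5213941.0 for minikubeCA.pem above), which is exactly what `openssl x509 -hash -noout` prints. A minimal sketch of deriving that link name by shelling out to openssl:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHash returns the OpenSSL subject hash for a PEM certificate,
    // the same value the `openssl x509 -hash -noout -in ...` runs above print.
    func subjectHash(pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		panic(err)
    	}
    	// c_rehash-style symlink target, e.g. /etc/ssl/certs/b5213941.0
    	fmt.Println("link name:", "/etc/ssl/certs/"+h+".0")
    }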
	I0229 02:31:25.453174  370051 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:25.458712  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:25.466032  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:25.473895  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:25.482948  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:25.491808  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:25.499003  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
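The `-checkend 86400` probes above ask whether each control-plane certificate will still be valid 24 hours from now; any that would expire get regenerated. A sketch of the same check using Go's crypto/x509 instead of the openssl CLI:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate's NotAfter falls inside
    // the next d, matching the semantics of `openssl x509 -checkend`.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err == nil && soon {
    		fmt.Println("certificate expires within 24h; regenerate")
    	}
    }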
	I0229 02:31:25.506691  370051 kubeadm.go:404] StartCluster: {Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:25.506829  370051 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:25.506883  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:25.551867  370051 cri.go:89] found id: ""
	I0229 02:31:25.551970  370051 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:25.564446  370051 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:25.564476  370051 kubeadm.go:636] restartCluster start
	I0229 02:31:25.564545  370051 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:25.576275  370051 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:25.577406  370051 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:25.578043  370051 kubeconfig.go:146] "old-k8s-version-275488" context is missing from /home/jenkins/minikube-integration/18063-316644/kubeconfig - will repair!
	I0229 02:31:25.578979  370051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:25.580805  370051 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:25.592154  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:25.592259  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:25.609268  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:26.092701  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.092827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.108636  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:26.592320  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.592412  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.606907  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.092891  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.093028  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.112353  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.592956  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.593058  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.612315  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.092611  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.092729  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.108095  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.592592  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.592679  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.612145  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.092605  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.092720  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.113807  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.593002  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.593085  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.609337  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.092667  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.092757  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.112800  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.592328  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.592415  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.610909  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:31.092418  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.092529  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.109490  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:31.593046  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.593128  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.608148  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.092187  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.092299  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.107573  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.593184  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.593312  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.607993  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.092500  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.092603  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.107359  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.592987  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.593101  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.608041  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.092919  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.093023  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.107597  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.593200  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.593295  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.608100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.092589  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.092683  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.107100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.592815  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.592928  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.610879  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.610916  370051 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
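The loop above is a fixed-cadence probe: roughly every 500ms the runner looks for a kube-apiserver process with pgrep, and when the surrounding context's deadline expires it concludes the apiserver never came up, hence "needs reconfigure: apiserver error: context deadline exceeded". A stdlib-only Go sketch of that pattern, with an illustrative timeout:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer retries the pgrep probe every 500ms until it succeeds
    // or the context deadline expires.
    func waitForAPIServer(ctx context.Context) error {
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		// pgrep exits 0 only when a matching process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // surfaces as "context deadline exceeded"
    		case <-tick.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	fmt.Println(waitForAPIServer(ctx))
    }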
	I0229 02:31:35.610930  370051 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:35.610947  370051 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:35.611032  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:35.660059  370051 cri.go:89] found id: ""
	I0229 02:31:35.660146  370051 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:35.682067  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:35.694455  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:35.694542  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707118  370051 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707149  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:35.834811  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:36.790885  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.042778  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.130251  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
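Rather than a full `kubeadm init`, the restart path replays individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the same rendered /var/tmp/minikube/kubeadm.yaml. A minimal sketch of that sequencing; the sudo and PATH prefixing shown in the log are elided for brevity:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The phase order matches the five kubeadm invocations logged above.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			// Stop at the first failing phase, as a real restart would.
    			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    }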
	I0229 02:31:37.215289  370051 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:37.215384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:37.715589  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.715938  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.215781  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.716505  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.216238  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.716182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:41.216236  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:41.716082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.215537  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.715524  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.215873  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.715634  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.216464  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.715519  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.216430  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.716196  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:46.215715  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:46.715657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.216495  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.715491  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.215459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.715556  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.215675  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.716046  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.215993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.715594  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:51.215927  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:51.715888  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.215659  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.715769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.216175  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.715755  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.216468  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.216280  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.715924  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.215653  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.715898  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.215954  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.216366  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.716093  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.215944  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.715553  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.216341  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.715677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.216197  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.716302  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.216170  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.715615  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.216580  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.716088  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.215743  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.716142  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.216543  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.715853  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:06.216206  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:06.715748  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.215964  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.716419  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.216034  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.715611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.216207  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.716408  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.216144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.716454  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:11.215611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:11.716198  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.216332  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.716413  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.216407  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.716466  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.216182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.716285  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.215995  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.715613  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:16.215530  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:16.716420  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.216031  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.716303  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.216082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.715523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.216166  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.716503  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.215680  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.715770  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:21.215523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:21.715617  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.216133  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.716029  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.216141  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.715578  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.215640  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.715601  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.215959  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.716394  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:26.215946  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:26.715834  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.216243  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.715581  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.215521  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.715849  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.716497  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.215657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.715492  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:31.216322  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:31.716160  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.215557  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.715618  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.215761  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.716216  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.216460  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.716244  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.215551  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.715633  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:36.215910  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:36.716307  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:37.216308  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:37.216404  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:37.262324  370051 cri.go:89] found id: ""
	I0229 02:32:37.262358  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.262370  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:37.262378  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:37.262442  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:37.303758  370051 cri.go:89] found id: ""
	I0229 02:32:37.303790  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.303802  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:37.303809  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:37.303880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:37.349512  370051 cri.go:89] found id: ""
	I0229 02:32:37.349538  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.349546  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:37.349553  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:37.349607  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:37.389630  370051 cri.go:89] found id: ""
	I0229 02:32:37.389657  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.389668  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:37.389676  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:37.389752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:37.435918  370051 cri.go:89] found id: ""
	I0229 02:32:37.435954  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.435967  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:37.435976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:37.436044  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:37.479336  370051 cri.go:89] found id: ""
	I0229 02:32:37.479369  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.479377  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:37.479384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:37.479460  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:37.519944  370051 cri.go:89] found id: ""
	I0229 02:32:37.519979  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.519991  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:37.519999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:37.520071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:37.563848  370051 cri.go:89] found id: ""
	I0229 02:32:37.563875  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.563884  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:37.563895  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:37.563915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:37.607989  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:37.608025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:37.660272  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:37.660324  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:37.676878  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:37.676909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:37.805099  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:37.805132  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:37.805159  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
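With no kube-system containers to inspect, the diagnostics pass falls back to host-level sources: container status (note the `which crictl || echo crictl` guard, with `docker ps -a` as a second fallback), the kubelet and crio journals, dmesg, and a describe-nodes attempt that fails while the apiserver is down. A minimal Go sketch of that best-effort sweep over a few of the collectors:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Each collector is a shell pipeline copied from the log lines above;
    	// failures are reported and skipped rather than aborting the sweep.
    	collectors := []struct{ name, cmd string }{
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    	}
    	for _, c := range collectors {
    		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("== %s: collection failed: %v ==\n", c.name, err)
    			continue
    		}
    		fmt.Printf("== %s ==\n%s", c.name, out)
    	}
    }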
	I0229 02:32:40.378467  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:40.393066  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:40.393221  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:40.432592  370051 cri.go:89] found id: ""
	I0229 02:32:40.432619  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.432628  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:40.432634  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:40.432693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:40.473651  370051 cri.go:89] found id: ""
	I0229 02:32:40.473706  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.473716  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:40.473722  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:40.473781  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:40.520262  370051 cri.go:89] found id: ""
	I0229 02:32:40.520292  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.520303  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:40.520312  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:40.520374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:40.560359  370051 cri.go:89] found id: ""
	I0229 02:32:40.560393  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.560402  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:40.560408  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:40.560474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:40.602145  370051 cri.go:89] found id: ""
	I0229 02:32:40.602173  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.602181  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:40.602187  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:40.602266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:40.640744  370051 cri.go:89] found id: ""
	I0229 02:32:40.640778  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.640791  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:40.640799  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:40.640869  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:40.681863  370051 cri.go:89] found id: ""
	I0229 02:32:40.681895  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.681908  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:40.681916  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:40.681985  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:40.725859  370051 cri.go:89] found id: ""
	I0229 02:32:40.725890  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.725899  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:40.725910  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:40.725924  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:40.794666  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:40.794705  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:40.854173  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:40.854215  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:40.901744  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:40.901786  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:40.925331  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:40.925371  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:41.005785  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:43.506756  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:43.522038  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:43.522135  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:43.559609  370051 cri.go:89] found id: ""
	I0229 02:32:43.559635  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.559642  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:43.559649  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:43.559707  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:43.609059  370051 cri.go:89] found id: ""
	I0229 02:32:43.609087  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.609096  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:43.609102  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:43.609159  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:43.648988  370051 cri.go:89] found id: ""
	I0229 02:32:43.649018  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.649029  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:43.649037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:43.649104  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:43.690995  370051 cri.go:89] found id: ""
	I0229 02:32:43.691028  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.691042  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:43.691054  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:43.691120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:43.729221  370051 cri.go:89] found id: ""
	I0229 02:32:43.729249  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.729257  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:43.729263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:43.729334  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:43.767141  370051 cri.go:89] found id: ""
	I0229 02:32:43.767174  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.767186  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:43.767194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:43.767266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:43.807926  370051 cri.go:89] found id: ""
	I0229 02:32:43.807962  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.807970  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:43.807976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:43.808029  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:43.857945  370051 cri.go:89] found id: ""
	I0229 02:32:43.857973  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.857981  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:43.857991  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:43.858005  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:43.941290  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:43.941338  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:43.986788  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:43.986823  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:44.037384  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:44.037421  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:44.052668  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:44.052696  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:44.127124  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:46.627409  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:46.642306  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:46.642397  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:46.685358  370051 cri.go:89] found id: ""
	I0229 02:32:46.685389  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.685400  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:46.685431  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:46.685493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:46.724996  370051 cri.go:89] found id: ""
	I0229 02:32:46.725026  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.725035  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:46.725041  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:46.725113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:46.765815  370051 cri.go:89] found id: ""
	I0229 02:32:46.765849  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.765857  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:46.765863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:46.765924  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:46.808946  370051 cri.go:89] found id: ""
	I0229 02:32:46.808980  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.808991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:46.809000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:46.809068  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:46.865068  370051 cri.go:89] found id: ""
	I0229 02:32:46.865106  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.865119  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:46.865127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:46.865200  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:46.932233  370051 cri.go:89] found id: ""
	I0229 02:32:46.932260  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.932268  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:46.932275  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:46.932331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:46.985701  370051 cri.go:89] found id: ""
	I0229 02:32:46.985732  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.985744  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:46.985752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:46.985819  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:47.027497  370051 cri.go:89] found id: ""
	I0229 02:32:47.027524  370051 logs.go:276] 0 containers: []
	W0229 02:32:47.027536  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:47.027548  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:47.027565  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:47.075955  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:47.075990  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:47.093922  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:47.093949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:47.165000  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:47.165029  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:47.165046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:47.250161  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:47.250201  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:49.794654  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:49.809706  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:49.809787  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:49.868163  370051 cri.go:89] found id: ""
	I0229 02:32:49.868197  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.868217  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:49.868223  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:49.868277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:49.928462  370051 cri.go:89] found id: ""
	I0229 02:32:49.928495  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.928508  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:49.928516  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:49.928580  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:49.975725  370051 cri.go:89] found id: ""
	I0229 02:32:49.975755  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.975765  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:49.975774  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:49.975849  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:50.017007  370051 cri.go:89] found id: ""
	I0229 02:32:50.017036  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.017046  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:50.017051  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:50.017118  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:50.054522  370051 cri.go:89] found id: ""
	I0229 02:32:50.054551  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.054560  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:50.054566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:50.054620  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:50.096274  370051 cri.go:89] found id: ""
	I0229 02:32:50.096300  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.096308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:50.096319  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:50.096382  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:50.142543  370051 cri.go:89] found id: ""
	I0229 02:32:50.142581  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.142590  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:50.142597  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:50.142667  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:50.182452  370051 cri.go:89] found id: ""
	I0229 02:32:50.182482  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.182492  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:50.182505  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:50.182522  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:50.266311  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:50.266355  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:50.309277  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:50.309322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:50.360492  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:50.360536  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:50.376711  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:50.376744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:50.447128  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
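
Between probes, minikube gathers diagnostics ("Gathering logs for ..."). A condensed sketch of the same collection under the same assumptions as above (commands copied from the log; redirecting to files is added here only for convenience):

    #!/usr/bin/env bash
    # Collect the same diagnostics the harness gathers between probes.
    sudo journalctl -u kubelet -n 400 > kubelet.log           # kubelet
    sudo journalctl -u crio -n 400 > crio.log                 # CRI-O
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg \
      | tail -n 400 > dmesg.log                               # kernel warnings
    sudo "$(which crictl || echo crictl)" ps -a \
      || sudo docker ps -a                                    # container status
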
	I0229 02:32:52.947926  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:52.970209  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:52.970317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:53.010840  370051 cri.go:89] found id: ""
	I0229 02:32:53.010868  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.010878  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:53.010886  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:53.010983  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:53.049458  370051 cri.go:89] found id: ""
	I0229 02:32:53.049490  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.049503  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:53.049511  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:53.049578  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:53.088615  370051 cri.go:89] found id: ""
	I0229 02:32:53.088646  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.088656  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:53.088671  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:53.088738  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:53.130176  370051 cri.go:89] found id: ""
	I0229 02:32:53.130210  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.130237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:53.130247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:53.130317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:53.177876  370051 cri.go:89] found id: ""
	I0229 02:32:53.177908  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.177920  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:53.177928  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:53.177991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:53.216036  370051 cri.go:89] found id: ""
	I0229 02:32:53.216065  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.216074  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:53.216080  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:53.216143  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:53.254673  370051 cri.go:89] found id: ""
	I0229 02:32:53.254705  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.254716  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:53.254724  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:53.254785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:53.291508  370051 cri.go:89] found id: ""
	I0229 02:32:53.291539  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.291551  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:53.291564  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:53.291581  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:53.343312  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:53.343354  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:53.359264  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:53.359294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:53.431396  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:53.431428  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:53.431445  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:53.512494  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:53.512529  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:56.057340  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:56.073074  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:56.073158  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:56.111650  370051 cri.go:89] found id: ""
	I0229 02:32:56.111684  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.111704  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:56.111713  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:56.111785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:56.150147  370051 cri.go:89] found id: ""
	I0229 02:32:56.150178  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.150191  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:56.150200  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:56.150280  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:56.192842  370051 cri.go:89] found id: ""
	I0229 02:32:56.192878  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.192890  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:56.192898  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:56.192969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:56.232013  370051 cri.go:89] found id: ""
	I0229 02:32:56.232051  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.232062  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:56.232079  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:56.232151  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:56.273824  370051 cri.go:89] found id: ""
	I0229 02:32:56.273858  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.273871  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:56.273882  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:56.273949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:56.312112  370051 cri.go:89] found id: ""
	I0229 02:32:56.312139  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.312147  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:56.312153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:56.312203  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:56.352558  370051 cri.go:89] found id: ""
	I0229 02:32:56.352585  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.352593  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:56.352600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:56.352666  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:56.397719  370051 cri.go:89] found id: ""
	I0229 02:32:56.397762  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.397775  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:56.397790  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:56.397808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:56.447793  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:56.447831  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:56.463859  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:56.463894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:56.540306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:56.540333  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:56.540347  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:56.633201  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:56.633247  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.207459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:59.222165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:59.222271  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:59.261197  370051 cri.go:89] found id: ""
	I0229 02:32:59.261230  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.261242  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:59.261251  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:59.261338  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:59.300874  370051 cri.go:89] found id: ""
	I0229 02:32:59.300917  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.300940  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:59.300950  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:59.301025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:59.345399  370051 cri.go:89] found id: ""
	I0229 02:32:59.345435  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.345446  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:59.345455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:59.345525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:59.386068  370051 cri.go:89] found id: ""
	I0229 02:32:59.386102  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.386112  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:59.386132  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:59.386184  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:59.436597  370051 cri.go:89] found id: ""
	I0229 02:32:59.436629  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.436641  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:59.436649  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:59.436708  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:59.481417  370051 cri.go:89] found id: ""
	I0229 02:32:59.481446  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.481462  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:59.481469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:59.481535  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:59.527725  370051 cri.go:89] found id: ""
	I0229 02:32:59.527752  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.527763  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:59.527771  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:59.527845  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:59.574502  370051 cri.go:89] found id: ""
	I0229 02:32:59.574535  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.574547  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:59.574561  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:59.574579  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:59.669584  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:59.669630  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.730049  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:59.730096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:59.779562  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:59.779613  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:59.797016  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:59.797046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:59.876438  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:02.377144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:02.391585  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:02.391682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:02.432359  370051 cri.go:89] found id: ""
	I0229 02:33:02.432390  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.432399  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:02.432406  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:02.432462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:02.476733  370051 cri.go:89] found id: ""
	I0229 02:33:02.476768  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.476781  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:02.476790  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:02.476856  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:02.521414  370051 cri.go:89] found id: ""
	I0229 02:33:02.521440  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.521448  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:02.521454  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:02.521513  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:02.561663  370051 cri.go:89] found id: ""
	I0229 02:33:02.561690  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.561698  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:02.561704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:02.561755  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:02.611953  370051 cri.go:89] found id: ""
	I0229 02:33:02.611989  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.612002  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:02.612010  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:02.612079  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:02.663254  370051 cri.go:89] found id: ""
	I0229 02:33:02.663282  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.663290  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:02.663297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:02.663348  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:02.721449  370051 cri.go:89] found id: ""
	I0229 02:33:02.721484  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.721497  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:02.721506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:02.721579  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:02.761197  370051 cri.go:89] found id: ""
	I0229 02:33:02.761239  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.761251  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:02.761265  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:02.761282  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:02.810457  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:02.810498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:02.828906  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:02.828940  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:02.911895  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:02.911932  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:02.911945  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:02.995120  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:02.995152  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:05.544629  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:05.559266  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:05.559342  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:05.609673  370051 cri.go:89] found id: ""
	I0229 02:33:05.609706  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.609718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:05.609727  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:05.609795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:05.665161  370051 cri.go:89] found id: ""
	I0229 02:33:05.665192  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.665203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:05.665211  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:05.665282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:05.719923  370051 cri.go:89] found id: ""
	I0229 02:33:05.719949  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.719957  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:05.719963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:05.720025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:05.765189  370051 cri.go:89] found id: ""
	I0229 02:33:05.765224  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.765237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:05.765245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:05.765357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:05.803788  370051 cri.go:89] found id: ""
	I0229 02:33:05.803820  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.803829  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:05.803836  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:05.803909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:05.842152  370051 cri.go:89] found id: ""
	I0229 02:33:05.842178  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.842188  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:05.842197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:05.842278  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:05.885042  370051 cri.go:89] found id: ""
	I0229 02:33:05.885071  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.885084  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:05.885092  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:05.885156  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:05.926032  370051 cri.go:89] found id: ""
	I0229 02:33:05.926069  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.926082  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:05.926096  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:05.926112  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:06.014702  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:06.014744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:06.063510  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:06.063550  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:06.114215  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:06.114272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:06.130132  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:06.130169  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:06.205692  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
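
Every iteration fails at the same point: kubectl cannot reach localhost:8443. A quick hypothetical check (not something the harness runs) to confirm whether anything is listening on the apiserver port before suspecting the kubeconfig:

    # Hypothetical manual check, not part of the test harness.
    sudo ss -ltnp | grep 8443 || echo 'nothing listening on :8443'
    # If a listener exists, probe the apiserver health endpoint directly;
    # -k skips TLS verification of the self-signed serving certificate.
    curl -ks https://localhost:8443/healthz; echo

Given the empty crictl listings above, a refused connection here is expected: the apiserver container never started, so the kubelet and CRI-O journals gathered between probes are the place to look for the root cause.
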
	I0229 02:33:08.706549  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:08.722548  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:08.722614  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:08.768518  370051 cri.go:89] found id: ""
	I0229 02:33:08.768553  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.768564  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:08.768572  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:08.768630  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:08.804600  370051 cri.go:89] found id: ""
	I0229 02:33:08.804630  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.804643  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:08.804651  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:08.804721  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:08.842466  370051 cri.go:89] found id: ""
	I0229 02:33:08.842497  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.842510  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:08.842518  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:08.842589  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:08.878384  370051 cri.go:89] found id: ""
	I0229 02:33:08.878412  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.878421  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:08.878427  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:08.878484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:08.924228  370051 cri.go:89] found id: ""
	I0229 02:33:08.924262  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.924275  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:08.924295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:08.924374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:08.966122  370051 cri.go:89] found id: ""
	I0229 02:33:08.966157  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.966168  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:08.966177  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:08.966254  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:09.011109  370051 cri.go:89] found id: ""
	I0229 02:33:09.011135  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.011144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:09.011152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:09.011217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:09.059716  370051 cri.go:89] found id: ""
	I0229 02:33:09.059749  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.059782  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:09.059795  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:09.059812  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:09.110564  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:09.110599  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:09.126037  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:09.126065  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:09.199827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:09.199858  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:09.199892  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:09.282624  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:09.282661  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:11.829017  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:11.842826  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:11.842894  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:11.881652  370051 cri.go:89] found id: ""
	I0229 02:33:11.881689  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.881700  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:11.881709  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:11.881773  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:11.919252  370051 cri.go:89] found id: ""
	I0229 02:33:11.919291  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.919302  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:11.919309  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:11.919380  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:11.959145  370051 cri.go:89] found id: ""
	I0229 02:33:11.959175  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.959187  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:11.959196  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:11.959263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:12.002105  370051 cri.go:89] found id: ""
	I0229 02:33:12.002134  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.002145  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:12.002153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:12.002219  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:12.042157  370051 cri.go:89] found id: ""
	I0229 02:33:12.042188  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.042221  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:12.042249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:12.042326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:12.080121  370051 cri.go:89] found id: ""
	I0229 02:33:12.080150  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.080158  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:12.080165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:12.080231  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:12.119259  370051 cri.go:89] found id: ""
	I0229 02:33:12.119286  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.119294  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:12.119301  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:12.119357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:12.160136  370051 cri.go:89] found id: ""
	I0229 02:33:12.160171  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.160182  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:12.160195  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:12.160209  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:12.209770  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:12.209810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:12.226429  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:12.226460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:12.295933  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:12.295966  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:12.295978  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:12.380794  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:12.380843  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:14.971692  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:14.986085  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:14.986162  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:15.024756  370051 cri.go:89] found id: ""
	I0229 02:33:15.024788  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.024801  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:15.024809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:15.024868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:15.065131  370051 cri.go:89] found id: ""
	I0229 02:33:15.065159  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.065172  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:15.065180  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:15.065251  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:15.104744  370051 cri.go:89] found id: ""
	I0229 02:33:15.104775  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.104786  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:15.104794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:15.104858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:15.145710  370051 cri.go:89] found id: ""
	I0229 02:33:15.145737  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.145745  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:15.145752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:15.145803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:15.184908  370051 cri.go:89] found id: ""
	I0229 02:33:15.184933  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.184942  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:15.184951  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:15.185016  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:15.230195  370051 cri.go:89] found id: ""
	I0229 02:33:15.230220  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.230241  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:15.230249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:15.230326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:15.269750  370051 cri.go:89] found id: ""
	I0229 02:33:15.269774  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.269783  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:15.269789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:15.269852  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:15.312331  370051 cri.go:89] found id: ""
	I0229 02:33:15.312360  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.312373  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:15.312387  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:15.312402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:15.363032  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:15.363067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:15.422421  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:15.422463  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:15.445235  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:15.445272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:15.530010  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:15.530047  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:15.530066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:18.116265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:18.130375  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:18.130439  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:18.167740  370051 cri.go:89] found id: ""
	I0229 02:33:18.167767  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.167776  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:18.167782  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:18.167843  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:18.205621  370051 cri.go:89] found id: ""
	I0229 02:33:18.205653  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.205662  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:18.205670  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:18.205725  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:18.246917  370051 cri.go:89] found id: ""
	I0229 02:33:18.246954  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.246975  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:18.246983  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:18.247040  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:18.285087  370051 cri.go:89] found id: ""
	I0229 02:33:18.285114  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.285123  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:18.285130  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:18.285181  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:18.323989  370051 cri.go:89] found id: ""
	I0229 02:33:18.324018  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.324027  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:18.324033  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:18.324094  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:18.372741  370051 cri.go:89] found id: ""
	I0229 02:33:18.372769  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.372779  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:18.372785  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:18.372838  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:18.432846  370051 cri.go:89] found id: ""
	I0229 02:33:18.432888  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.432900  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:18.432908  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:18.432977  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:18.486357  370051 cri.go:89] found id: ""
	I0229 02:33:18.486387  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.486399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:18.486411  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:18.486431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:18.532363  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:18.532402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:18.582035  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:18.582076  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:18.599009  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:18.599050  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:18.673580  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
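
Each cycle probes the same fixed list of control-plane components with "crictl ps -a --quiet --name=<component>": -a includes exited containers and --quiet prints only container IDs, so an empty result (found id: "", "0 containers") means the component was never even created, not merely that it crashed. A minimal standalone sketch of that scan, assuming crictl is on PATH inside the node:

    #!/usr/bin/env bash
    # Reproduce the per-component scan from the log: list all matching CRI
    # containers in any state; an empty ID list means the component never started.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container found matching \"$c\"" || echo "$c: $ids"
    done
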
	I0229 02:33:18.673609  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:18.673625  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:21.259614  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:21.274150  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:21.274250  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:21.311859  370051 cri.go:89] found id: ""
	I0229 02:33:21.311895  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.311908  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:21.311917  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:21.311984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:21.364260  370051 cri.go:89] found id: ""
	I0229 02:33:21.364296  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.364309  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:21.364317  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:21.364391  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:21.424181  370051 cri.go:89] found id: ""
	I0229 02:33:21.424217  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.424229  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:21.424237  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:21.424306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:21.482499  370051 cri.go:89] found id: ""
	I0229 02:33:21.482531  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.482543  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:21.482551  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:21.482621  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:21.523743  370051 cri.go:89] found id: ""
	I0229 02:33:21.523775  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.523785  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:21.523793  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:21.523868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:21.563759  370051 cri.go:89] found id: ""
	I0229 02:33:21.563789  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.563800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:21.563809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:21.563889  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:21.610162  370051 cri.go:89] found id: ""
	I0229 02:33:21.610265  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.610286  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:21.610295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:21.610378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:21.652001  370051 cri.go:89] found id: ""
	I0229 02:33:21.652028  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.652037  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:21.652047  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:21.652060  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:21.704028  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:21.704067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:21.720924  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:21.720956  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:21.798619  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:21.798645  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:21.798664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:21.888445  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:21.888506  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.437647  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:24.459963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:24.460041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:24.503906  370051 cri.go:89] found id: ""
	I0229 02:33:24.503940  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.503950  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:24.503956  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:24.504031  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:24.541893  370051 cri.go:89] found id: ""
	I0229 02:33:24.541919  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.541929  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:24.541935  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:24.541991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:24.584717  370051 cri.go:89] found id: ""
	I0229 02:33:24.584748  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.584760  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:24.584769  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:24.584836  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:24.623334  370051 cri.go:89] found id: ""
	I0229 02:33:24.623362  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.623371  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:24.623378  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:24.623447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:24.665862  370051 cri.go:89] found id: ""
	I0229 02:33:24.665890  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.665902  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:24.665911  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:24.665984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:24.705509  370051 cri.go:89] found id: ""
	I0229 02:33:24.705540  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.705551  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:24.705560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:24.705634  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:24.745348  370051 cri.go:89] found id: ""
	I0229 02:33:24.745389  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.745399  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:24.745406  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:24.745462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:24.785490  370051 cri.go:89] found id: ""
	I0229 02:33:24.785520  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.785529  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:24.785539  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:24.785553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.829556  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:24.829589  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:24.877914  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:24.877949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:24.894590  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:24.894623  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:24.972948  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:24.972981  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:24.972997  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:27.555364  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:27.570747  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:27.570820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:27.609771  370051 cri.go:89] found id: ""
	I0229 02:33:27.609800  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.609807  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:27.609813  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:27.609863  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:27.654316  370051 cri.go:89] found id: ""
	I0229 02:33:27.654347  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.654360  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:27.654376  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:27.654453  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:27.695089  370051 cri.go:89] found id: ""
	I0229 02:33:27.695125  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.695137  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:27.695143  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:27.695199  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:27.733846  370051 cri.go:89] found id: ""
	I0229 02:33:27.733881  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.733893  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:27.733901  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:27.733972  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:27.772906  370051 cri.go:89] found id: ""
	I0229 02:33:27.772940  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.772953  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:27.772961  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:27.773039  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:27.812266  370051 cri.go:89] found id: ""
	I0229 02:33:27.812295  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.812308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:27.812316  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:27.812387  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:27.849272  370051 cri.go:89] found id: ""
	I0229 02:33:27.849305  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.849316  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:27.849324  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:27.849393  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:27.887495  370051 cri.go:89] found id: ""
	I0229 02:33:27.887528  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.887541  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:27.887554  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:27.887569  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:27.972220  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:27.972261  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:28.020757  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:28.020797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:28.070347  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:28.070381  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:28.089905  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:28.089947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:28.183306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
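
With no containers to inspect, each cycle falls back to gathering the remaining log sources: the kubelet and crio systemd units via "journalctl -n 400", kernel messages via dmesg (-H human-readable, -P no pager, -L=never to disable color, --level restricting output to warn and worse), and container status via crictl with a docker fallback. In that last command, "which crictl || echo crictl" resolves crictl's absolute path when installed and otherwise leaves the bare name, so if crictl is absent or fails, the "|| sudo docker ps -a" branch runs instead. A sketch bundling the same diagnostics into one file; the output path is an illustrative assumption:

    #!/usr/bin/env bash
    # Bundle the same diagnostics the loop gathers one by one.
    {
      sudo journalctl -u kubelet -n 400                  # kubelet unit log
      sudo journalctl -u crio -n 400                     # CRI-O unit log
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    } > /tmp/node-diag.txt 2>&1                          # /tmp path is illustrative
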
	I0229 02:33:30.683857  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:30.701341  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:30.701443  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:30.741342  370051 cri.go:89] found id: ""
	I0229 02:33:30.741376  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.741387  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:30.741397  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:30.741475  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:30.785372  370051 cri.go:89] found id: ""
	I0229 02:33:30.785415  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.785427  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:30.785435  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:30.785506  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:30.828402  370051 cri.go:89] found id: ""
	I0229 02:33:30.828428  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.828436  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:30.828442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:30.828504  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:30.872656  370051 cri.go:89] found id: ""
	I0229 02:33:30.872684  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.872695  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:30.872704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:30.872770  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:30.918746  370051 cri.go:89] found id: ""
	I0229 02:33:30.918775  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.918786  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:30.918794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:30.918867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:30.956794  370051 cri.go:89] found id: ""
	I0229 02:33:30.956838  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.956852  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:30.956860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:30.956935  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:31.000595  370051 cri.go:89] found id: ""
	I0229 02:33:31.000618  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.000628  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:31.000637  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:31.000699  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:31.039060  370051 cri.go:89] found id: ""
	I0229 02:33:31.039089  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.039100  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:31.039111  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:31.039133  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:31.089919  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:31.089949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:31.110276  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:31.110315  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:31.235760  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:31.235791  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:31.235810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:31.323257  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:31.323322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:33.872956  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:33.887953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:33.888034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:33.927887  370051 cri.go:89] found id: ""
	I0229 02:33:33.927926  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.927938  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:33.927945  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:33.928001  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:33.967301  370051 cri.go:89] found id: ""
	I0229 02:33:33.967333  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.967345  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:33.967356  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:33.967425  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:34.009949  370051 cri.go:89] found id: ""
	I0229 02:33:34.009982  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.009992  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:34.009999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:34.010073  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:34.056197  370051 cri.go:89] found id: ""
	I0229 02:33:34.056224  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.056232  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:34.056239  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:34.056314  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:34.107089  370051 cri.go:89] found id: ""
	I0229 02:33:34.107120  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.107132  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:34.107140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:34.107206  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:34.162822  370051 cri.go:89] found id: ""
	I0229 02:33:34.162856  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.162875  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:34.162884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:34.162961  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:34.209963  370051 cri.go:89] found id: ""
	I0229 02:33:34.209993  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.210001  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:34.210008  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:34.210078  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:34.250688  370051 cri.go:89] found id: ""
	I0229 02:33:34.250726  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.250735  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:34.250754  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:34.250768  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:34.298953  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:34.298993  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:34.314067  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:34.314100  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:34.393515  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:34.393536  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:34.393551  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:34.477034  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:34.477078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:37.025152  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:37.040410  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:37.040491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:37.077922  370051 cri.go:89] found id: ""
	I0229 02:33:37.077953  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.077965  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:37.077973  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:37.078041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:37.137895  370051 cri.go:89] found id: ""
	I0229 02:33:37.137925  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.137938  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:37.137946  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:37.138012  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:37.199291  370051 cri.go:89] found id: ""
	I0229 02:33:37.199324  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.199336  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:37.199344  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:37.199422  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:37.242817  370051 cri.go:89] found id: ""
	I0229 02:33:37.242848  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.242857  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:37.242863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:37.242917  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:37.282171  370051 cri.go:89] found id: ""
	I0229 02:33:37.282196  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.282204  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:37.282211  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:37.282284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:37.328608  370051 cri.go:89] found id: ""
	I0229 02:33:37.328639  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.328647  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:37.328658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:37.328724  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:37.372965  370051 cri.go:89] found id: ""
	I0229 02:33:37.372996  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.373008  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:37.373016  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:37.373091  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:37.417597  370051 cri.go:89] found id: ""
	I0229 02:33:37.417630  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.417642  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:37.417655  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:37.417673  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:37.472023  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:37.472058  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:37.487931  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:37.487961  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:37.568196  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:37.568227  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:37.568245  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:37.658485  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:37.658523  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.203039  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:40.220385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:40.220477  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:40.262962  370051 cri.go:89] found id: ""
	I0229 02:33:40.262993  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.263004  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:40.263016  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:40.263086  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:40.302452  370051 cri.go:89] found id: ""
	I0229 02:33:40.302483  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.302495  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:40.302503  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:40.302560  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:40.342509  370051 cri.go:89] found id: ""
	I0229 02:33:40.342544  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.342557  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:40.342566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:40.342644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:40.385585  370051 cri.go:89] found id: ""
	I0229 02:33:40.385615  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.385629  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:40.385638  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:40.385703  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:40.426839  370051 cri.go:89] found id: ""
	I0229 02:33:40.426874  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.426887  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:40.426896  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:40.426962  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:40.467217  370051 cri.go:89] found id: ""
	I0229 02:33:40.467241  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.467251  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:40.467257  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:40.467332  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:40.513525  370051 cri.go:89] found id: ""
	I0229 02:33:40.513546  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.513553  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:40.513559  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:40.513609  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:40.554187  370051 cri.go:89] found id: ""
	I0229 02:33:40.554256  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.554269  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:40.554282  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:40.554301  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:40.636447  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:40.636477  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:40.636494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:40.716381  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:40.716423  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.761946  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:40.761982  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:40.812828  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:40.812862  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:43.336139  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:43.352278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:43.352361  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:43.392555  370051 cri.go:89] found id: ""
	I0229 02:33:43.392593  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.392607  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:43.392616  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:43.392689  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:43.438169  370051 cri.go:89] found id: ""
	I0229 02:33:43.438202  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.438216  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:43.438242  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:43.438331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:43.476987  370051 cri.go:89] found id: ""
	I0229 02:33:43.477021  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.477033  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:43.477042  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:43.477109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:43.526728  370051 cri.go:89] found id: ""
	I0229 02:33:43.526758  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.526767  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:43.526778  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:43.526833  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:43.572222  370051 cri.go:89] found id: ""
	I0229 02:33:43.572260  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.572273  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:43.572282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:43.572372  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:43.618650  370051 cri.go:89] found id: ""
	I0229 02:33:43.618679  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.618691  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:43.618698  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:43.618764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:43.658069  370051 cri.go:89] found id: ""
	I0229 02:33:43.658104  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.658116  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:43.658126  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:43.658196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:43.700790  370051 cri.go:89] found id: ""
	I0229 02:33:43.700829  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.700841  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:43.700855  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:43.700874  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:43.753330  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:43.753372  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:43.770261  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:43.770294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:43.842407  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:43.842430  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:43.842447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:43.935427  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:43.935470  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:46.498694  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:46.516463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:46.516541  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:46.554731  370051 cri.go:89] found id: ""
	I0229 02:33:46.554757  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.554766  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:46.554772  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:46.554835  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:46.596851  370051 cri.go:89] found id: ""
	I0229 02:33:46.596892  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.596905  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:46.596912  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:46.596981  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:46.634978  370051 cri.go:89] found id: ""
	I0229 02:33:46.635008  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.635017  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:46.635024  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:46.635089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:46.675302  370051 cri.go:89] found id: ""
	I0229 02:33:46.675334  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.675347  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:46.675355  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:46.675423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:46.717366  370051 cri.go:89] found id: ""
	I0229 02:33:46.717402  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.717413  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:46.717421  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:46.717484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:46.756130  370051 cri.go:89] found id: ""
	I0229 02:33:46.756160  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.756169  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:46.756176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:46.756228  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:46.794283  370051 cri.go:89] found id: ""
	I0229 02:33:46.794312  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.794320  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:46.794328  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:46.794384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:46.836646  370051 cri.go:89] found id: ""
	I0229 02:33:46.836679  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.836691  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:46.836703  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:46.836721  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:46.926532  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:46.926578  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:46.981883  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:46.981915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:47.033571  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:47.033612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:47.049803  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:47.049833  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:47.123389  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:49.623827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:49.638175  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:49.638263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:49.675895  370051 cri.go:89] found id: ""
	I0229 02:33:49.675929  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.675941  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:49.675950  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:49.676009  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:49.720679  370051 cri.go:89] found id: ""
	I0229 02:33:49.720718  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.720730  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:49.720739  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:49.720808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:49.762299  370051 cri.go:89] found id: ""
	I0229 02:33:49.762329  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.762342  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:49.762350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:49.762426  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:49.809330  370051 cri.go:89] found id: ""
	I0229 02:33:49.809364  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.809376  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:49.809391  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:49.809455  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:49.859176  370051 cri.go:89] found id: ""
	I0229 02:33:49.859206  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.859218  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:49.859226  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:49.859292  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:49.914844  370051 cri.go:89] found id: ""
	I0229 02:33:49.914877  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.914890  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:49.914897  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:49.914967  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:49.969640  370051 cri.go:89] found id: ""
	I0229 02:33:49.969667  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.969676  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:49.969682  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:49.969736  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:50.010924  370051 cri.go:89] found id: ""
	I0229 02:33:50.010953  370051 logs.go:276] 0 containers: []
	W0229 02:33:50.010965  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:50.010976  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:50.011002  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:50.089462  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:50.089494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:50.132098  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:50.132129  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:50.182693  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:50.182737  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:50.198209  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:50.198256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:50.281521  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
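
The gate for each retry is the pgrep probe at the top of every cycle: -f matches the pattern against the full command line, -x requires the pattern to match it exactly, and -n picks the newest match, so a non-zero exit means no kube-apiserver process mentioning "minikube" exists yet. The timestamps show the loop re-running roughly every three seconds. A minimal wait-loop sketch built on the same probe; the sleep interval is an assumption inferred from those timestamps:

    #!/usr/bin/env bash
    # Block until a kube-apiserver whose full command line mentions "minikube" appears.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # log timestamps above show ~3 s between probes
    done
    echo "apiserver PID: $(sudo pgrep -xnf 'kube-apiserver.*minikube.*')"
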
	I0229 02:33:52.781677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:52.795962  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:52.796055  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:52.833670  370051 cri.go:89] found id: ""
	I0229 02:33:52.833706  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.833718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:52.833728  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:52.833795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:52.889497  370051 cri.go:89] found id: ""
	I0229 02:33:52.889529  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.889539  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:52.889547  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:52.889616  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:52.952880  370051 cri.go:89] found id: ""
	I0229 02:33:52.952915  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.952927  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:52.952935  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:52.953002  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:53.008380  370051 cri.go:89] found id: ""
	I0229 02:33:53.008409  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.008420  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:53.008434  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:53.008502  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:53.047877  370051 cri.go:89] found id: ""
	I0229 02:33:53.047911  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.047922  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:53.047931  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:53.047999  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:53.086080  370051 cri.go:89] found id: ""
	I0229 02:33:53.086107  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.086118  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:53.086127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:53.086193  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:53.128334  370051 cri.go:89] found id: ""
	I0229 02:33:53.128368  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.128378  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:53.128385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:53.128457  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:53.172201  370051 cri.go:89] found id: ""
	I0229 02:33:53.172232  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.172245  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:53.172258  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:53.172275  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:53.222608  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:53.222648  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:53.239888  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:53.239918  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:53.315827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:53.315850  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:53.315864  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:53.395457  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:53.395498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:55.943418  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:55.960562  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:55.960638  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:56.005181  370051 cri.go:89] found id: ""
	I0229 02:33:56.005210  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.005221  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:56.005229  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:56.005293  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:56.046700  370051 cri.go:89] found id: ""
	I0229 02:33:56.046731  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.046743  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:56.046750  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:56.046814  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:56.088459  370051 cri.go:89] found id: ""
	I0229 02:33:56.088486  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.088497  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:56.088505  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:56.088571  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:56.127729  370051 cri.go:89] found id: ""
	I0229 02:33:56.127762  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.127774  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:56.127783  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:56.127862  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:56.169980  370051 cri.go:89] found id: ""
	I0229 02:33:56.170011  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.170022  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:56.170030  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:56.170098  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:56.210650  370051 cri.go:89] found id: ""
	I0229 02:33:56.210682  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.210694  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:56.210704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:56.210771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:56.247342  370051 cri.go:89] found id: ""
	I0229 02:33:56.247380  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.247391  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:56.247400  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:56.247474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:56.286322  370051 cri.go:89] found id: ""
	I0229 02:33:56.286353  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.286364  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:56.286375  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:56.286393  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:56.335144  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:56.335184  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:56.351322  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:56.351359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:56.424251  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:56.424282  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:56.424299  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:56.506053  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:56.506082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.052805  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:59.067508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:59.067599  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:59.114213  370051 cri.go:89] found id: ""
	I0229 02:33:59.114256  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.114268  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:59.114276  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:59.114327  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:59.161087  370051 cri.go:89] found id: ""
	I0229 02:33:59.161123  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.161136  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:59.161145  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:59.161217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:59.206071  370051 cri.go:89] found id: ""
	I0229 02:33:59.206101  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.206114  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:59.206122  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:59.206196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:59.245152  370051 cri.go:89] found id: ""
	I0229 02:33:59.245179  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.245188  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:59.245194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:59.245247  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:59.286047  370051 cri.go:89] found id: ""
	I0229 02:33:59.286080  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.286092  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:59.286101  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:59.286165  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:59.323171  370051 cri.go:89] found id: ""
	I0229 02:33:59.323203  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.323214  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:59.323222  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:59.323288  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:59.364434  370051 cri.go:89] found id: ""
	I0229 02:33:59.364464  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.364477  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:59.364485  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:59.364554  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:59.405902  370051 cri.go:89] found id: ""
	I0229 02:33:59.405929  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.405938  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:59.405948  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:59.405980  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:59.481810  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:59.481841  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:59.481858  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:59.575726  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:59.575767  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.634808  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:59.634849  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:59.702513  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:59.702552  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:02.219660  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:02.234037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:02.234105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:02.277956  370051 cri.go:89] found id: ""
	I0229 02:34:02.277982  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.277991  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:02.277998  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:02.278071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:02.322832  370051 cri.go:89] found id: ""
	I0229 02:34:02.322856  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.322869  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:02.322878  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:02.322949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:02.368612  370051 cri.go:89] found id: ""
	I0229 02:34:02.368646  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.368659  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:02.368668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:02.368731  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:02.412436  370051 cri.go:89] found id: ""
	I0229 02:34:02.412466  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.412479  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:02.412486  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:02.412544  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:02.448682  370051 cri.go:89] found id: ""
	I0229 02:34:02.448713  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.448724  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:02.448733  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:02.448803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:02.486676  370051 cri.go:89] found id: ""
	I0229 02:34:02.486705  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.486723  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:02.486730  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:02.486795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:02.531814  370051 cri.go:89] found id: ""
	I0229 02:34:02.531841  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.531852  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:02.531860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:02.531934  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:02.569800  370051 cri.go:89] found id: ""
	I0229 02:34:02.569835  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.569845  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:02.569857  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:02.569871  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:02.623903  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:02.623937  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:02.643856  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:02.643884  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:02.735520  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:02.735544  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:02.735563  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:02.816572  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:02.816612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.371459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:05.385179  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:05.385255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:05.424653  370051 cri.go:89] found id: ""
	I0229 02:34:05.424679  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.424687  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:05.424694  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:05.424752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:05.463726  370051 cri.go:89] found id: ""
	I0229 02:34:05.463754  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.463763  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:05.463769  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:05.463823  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:05.510367  370051 cri.go:89] found id: ""
	I0229 02:34:05.510396  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.510407  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:05.510415  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:05.510480  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:05.548421  370051 cri.go:89] found id: ""
	I0229 02:34:05.548445  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.548455  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:05.548461  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:05.548527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:05.588778  370051 cri.go:89] found id: ""
	I0229 02:34:05.588801  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.588809  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:05.588815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:05.588875  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:05.638449  370051 cri.go:89] found id: ""
	I0229 02:34:05.638479  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.638490  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:05.638506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:05.638567  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:05.709921  370051 cri.go:89] found id: ""
	I0229 02:34:05.709950  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.709964  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:05.709972  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:05.710038  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:05.756965  370051 cri.go:89] found id: ""
	I0229 02:34:05.756992  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.757000  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:05.757010  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:05.757025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:05.826878  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:05.826904  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:05.826921  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:05.909205  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:05.909256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.954537  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:05.954594  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:06.004157  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:06.004203  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:08.522975  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:08.539247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:08.539326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:08.579776  370051 cri.go:89] found id: ""
	I0229 02:34:08.579806  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.579817  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:08.579826  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:08.579890  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:08.628415  370051 cri.go:89] found id: ""
	I0229 02:34:08.628444  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.628456  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:08.628468  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:08.628534  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:08.690499  370051 cri.go:89] found id: ""
	I0229 02:34:08.690530  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.690540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:08.690547  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:08.690613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:08.739755  370051 cri.go:89] found id: ""
	I0229 02:34:08.739788  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.739801  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:08.739809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:08.739906  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:08.781693  370051 cri.go:89] found id: ""
	I0229 02:34:08.781721  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.781733  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:08.781742  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:08.781808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:08.818605  370051 cri.go:89] found id: ""
	I0229 02:34:08.818637  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.818645  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:08.818652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:08.818713  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:08.861533  370051 cri.go:89] found id: ""
	I0229 02:34:08.861559  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.861569  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:08.861578  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:08.861658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:08.902727  370051 cri.go:89] found id: ""
	I0229 02:34:08.902758  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.902771  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:08.902784  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:08.902801  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:08.948527  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:08.948567  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:08.999883  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:08.999916  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:09.015438  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:09.015467  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:09.087965  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:09.087994  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:09.088010  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:11.671443  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:11.702197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:11.702322  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:11.755104  370051 cri.go:89] found id: ""
	I0229 02:34:11.755136  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.755147  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:11.755153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:11.755204  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:11.794190  370051 cri.go:89] found id: ""
	I0229 02:34:11.794218  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.794239  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:11.794247  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:11.794310  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:11.837330  370051 cri.go:89] found id: ""
	I0229 02:34:11.837360  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.837372  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:11.837380  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:11.837447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:11.876682  370051 cri.go:89] found id: ""
	I0229 02:34:11.876716  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.876726  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:11.876734  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:11.876805  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:11.922172  370051 cri.go:89] found id: ""
	I0229 02:34:11.922239  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.922262  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:11.922271  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:11.922341  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:11.962218  370051 cri.go:89] found id: ""
	I0229 02:34:11.962270  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.962283  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:11.962291  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:11.962375  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:12.002075  370051 cri.go:89] found id: ""
	I0229 02:34:12.002101  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.002110  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:12.002117  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:12.002169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:12.043337  370051 cri.go:89] found id: ""
	I0229 02:34:12.043378  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.043399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:12.043412  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:12.043428  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:12.094458  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:12.094491  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:12.112374  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:12.112401  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:12.193665  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:12.193689  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:12.193717  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:12.282510  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:12.282553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:14.828451  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:14.843626  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:14.843690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:14.884181  370051 cri.go:89] found id: ""
	I0229 02:34:14.884214  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.884226  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:14.884235  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:14.884302  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:14.926312  370051 cri.go:89] found id: ""
	I0229 02:34:14.926347  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.926361  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:14.926369  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:14.926436  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:14.969147  370051 cri.go:89] found id: ""
	I0229 02:34:14.969182  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.969195  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:14.969207  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:14.969277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:15.013000  370051 cri.go:89] found id: ""
	I0229 02:34:15.013045  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.013055  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:15.013064  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:15.013120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:15.055811  370051 cri.go:89] found id: ""
	I0229 02:34:15.055849  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.055861  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:15.055869  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:15.055939  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:15.100736  370051 cri.go:89] found id: ""
	I0229 02:34:15.100768  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.100780  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:15.100789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:15.100867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:15.140115  370051 cri.go:89] found id: ""
	I0229 02:34:15.140151  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.140164  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:15.140172  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:15.140239  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:15.183545  370051 cri.go:89] found id: ""
	I0229 02:34:15.183576  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.183588  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:15.183602  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:15.183621  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:15.258646  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:15.258676  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:15.258693  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:15.347035  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:15.347082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:15.407148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:15.407178  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:15.466695  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:15.466741  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:17.989102  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:18.005052  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:18.005126  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:18.044687  370051 cri.go:89] found id: ""
	I0229 02:34:18.044714  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.044725  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:18.044739  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:18.044815  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:18.085904  370051 cri.go:89] found id: ""
	I0229 02:34:18.085934  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.085944  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:18.085952  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:18.086017  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:18.129958  370051 cri.go:89] found id: ""
	I0229 02:34:18.129985  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.129994  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:18.129999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:18.130052  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:18.166942  370051 cri.go:89] found id: ""
	I0229 02:34:18.166979  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.166991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:18.167000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:18.167056  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:18.205297  370051 cri.go:89] found id: ""
	I0229 02:34:18.205324  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.205331  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:18.205337  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:18.205410  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:18.246415  370051 cri.go:89] found id: ""
	I0229 02:34:18.246448  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.246461  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:18.246469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:18.246527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:18.285534  370051 cri.go:89] found id: ""
	I0229 02:34:18.285573  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.285585  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:18.285600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:18.285662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:18.327624  370051 cri.go:89] found id: ""
	I0229 02:34:18.327651  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.327659  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:18.327670  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:18.327684  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:18.383307  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:18.383351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:18.408127  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:18.408162  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:18.502036  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:18.502070  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:18.502093  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:18.582289  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:18.582340  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:21.135649  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:21.149411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:21.149498  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:21.198246  370051 cri.go:89] found id: ""
	I0229 02:34:21.198286  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.198298  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:21.198306  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:21.198378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:21.240168  370051 cri.go:89] found id: ""
	I0229 02:34:21.240195  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.240203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:21.240209  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:21.240275  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:21.281243  370051 cri.go:89] found id: ""
	I0229 02:34:21.281277  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.281288  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:21.281296  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:21.281359  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:21.321573  370051 cri.go:89] found id: ""
	I0229 02:34:21.321609  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.321621  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:21.321629  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:21.321693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:21.375156  370051 cri.go:89] found id: ""
	I0229 02:34:21.375212  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.375226  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:21.375234  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:21.375308  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:21.430450  370051 cri.go:89] found id: ""
	I0229 02:34:21.430487  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.430499  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:21.430508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:21.430576  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:21.475095  370051 cri.go:89] found id: ""
	I0229 02:34:21.475124  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.475135  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:21.475144  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:21.475215  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:21.517378  370051 cri.go:89] found id: ""
	I0229 02:34:21.517403  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.517412  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:21.517424  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:21.517444  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:21.534103  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:21.534147  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:21.608375  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:21.608400  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:21.608412  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:21.691912  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:21.691950  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:21.744366  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:21.744406  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.295384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:24.309456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:24.309539  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:24.370125  370051 cri.go:89] found id: ""
	I0229 02:34:24.370156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.370167  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:24.370175  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:24.370256  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:24.439458  370051 cri.go:89] found id: ""
	I0229 02:34:24.439487  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.439499  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:24.439506  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:24.439639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:24.478070  370051 cri.go:89] found id: ""
	I0229 02:34:24.478105  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.478119  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:24.478127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:24.478194  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:24.517128  370051 cri.go:89] found id: ""
	I0229 02:34:24.517156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.517168  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:24.517176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:24.517243  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:24.555502  370051 cri.go:89] found id: ""
	I0229 02:34:24.555537  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.555549  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:24.555557  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:24.555625  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:24.601261  370051 cri.go:89] found id: ""
	I0229 02:34:24.601295  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.601307  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:24.601315  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:24.601389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:24.639110  370051 cri.go:89] found id: ""
	I0229 02:34:24.639141  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.639153  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:24.639161  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:24.639224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:24.681448  370051 cri.go:89] found id: ""
	I0229 02:34:24.681478  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.681487  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:24.681498  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:24.681517  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.730735  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:24.730775  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:24.746996  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:24.747031  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:24.827581  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:24.827608  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:24.827628  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:24.909551  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:24.909596  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:27.455967  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:27.477411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:27.477487  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:27.523163  370051 cri.go:89] found id: ""
	I0229 02:34:27.523189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.523198  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:27.523203  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:27.523258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:27.562298  370051 cri.go:89] found id: ""
	I0229 02:34:27.562330  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.562343  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:27.562350  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:27.562420  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:27.603506  370051 cri.go:89] found id: ""
	I0229 02:34:27.603532  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.603540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:27.603554  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:27.603619  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:27.646971  370051 cri.go:89] found id: ""
	I0229 02:34:27.647002  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.647014  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:27.647031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:27.647109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:27.685124  370051 cri.go:89] found id: ""
	I0229 02:34:27.685149  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.685160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:27.685169  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:27.685235  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:27.726976  370051 cri.go:89] found id: ""
	I0229 02:34:27.727007  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.727018  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:27.727026  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:27.727089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:27.767159  370051 cri.go:89] found id: ""
	I0229 02:34:27.767189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.767197  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:27.767204  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:27.767272  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:27.810377  370051 cri.go:89] found id: ""
	I0229 02:34:27.810411  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.810420  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:27.810431  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:27.810447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:27.858094  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:27.858136  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:27.874407  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:27.874440  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:27.953065  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:27.953092  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:27.953108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:28.042244  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:28.042278  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
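The block above shows the probe pattern this test repeats: minikube shells into the VM and asks the CRI runtime for each control-plane component by container name, and every query comes back empty. A minimal local sketch of that lookup, assuming only that crictl is installed and sudo is available (the helper names are illustrative, not minikube's own):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the "crictl ps -a --quiet --name=<component>"
    // calls in the log: it returns the container IDs (if any) for one name.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("listing %q failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d container(s) %v\n", c, len(ids), ids)
        }
    }

An empty result for every component, as in this run, means the control plane was never created, which is why every later kubectl call below is refused.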
	I0229 02:34:30.588227  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:30.604954  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:30.605037  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:30.642069  370051 cri.go:89] found id: ""
	I0229 02:34:30.642100  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.642108  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:30.642119  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:30.642187  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:30.686212  370051 cri.go:89] found id: ""
	I0229 02:34:30.686264  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.686277  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:30.686285  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:30.686364  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:30.726668  370051 cri.go:89] found id: ""
	I0229 02:34:30.726702  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.726715  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:30.726723  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:30.726788  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:30.766850  370051 cri.go:89] found id: ""
	I0229 02:34:30.766883  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.766895  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:30.766904  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:30.766979  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:30.808972  370051 cri.go:89] found id: ""
	I0229 02:34:30.809002  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.809015  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:30.809023  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:30.809093  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:30.851992  370051 cri.go:89] found id: ""
	I0229 02:34:30.852016  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.852025  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:30.852031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:30.852096  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:30.891100  370051 cri.go:89] found id: ""
	I0229 02:34:30.891132  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.891144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:30.891157  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:30.891227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:30.931740  370051 cri.go:89] found id: ""
	I0229 02:34:30.931768  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.931777  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:30.931787  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:30.931808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:31.010896  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:31.010919  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:31.010936  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:31.094626  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:31.094662  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:31.150765  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:31.150804  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:31.202932  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:31.202976  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:33.723355  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:33.738651  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:33.738753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:33.778255  370051 cri.go:89] found id: ""
	I0229 02:34:33.778287  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.778299  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:33.778307  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:33.778384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:33.818360  370051 cri.go:89] found id: ""
	I0229 02:34:33.818396  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.818406  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:33.818412  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:33.818564  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:33.866781  370051 cri.go:89] found id: ""
	I0229 02:34:33.866814  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.866824  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:33.866831  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:33.866891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:33.910013  370051 cri.go:89] found id: ""
	I0229 02:34:33.910051  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.910063  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:33.910072  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:33.910146  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:33.956068  370051 cri.go:89] found id: ""
	I0229 02:34:33.956098  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.956106  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:33.956113  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:33.956170  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:34.004997  370051 cri.go:89] found id: ""
	I0229 02:34:34.005027  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.005038  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:34.005047  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:34.005113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:34.059266  370051 cri.go:89] found id: ""
	I0229 02:34:34.059293  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.059302  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:34.059307  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:34.059363  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:34.105601  370051 cri.go:89] found id: ""
	I0229 02:34:34.105631  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.105643  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:34.105654  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:34.105669  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:34.208723  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:34.208764  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:34.262105  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:34.262137  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:34.314528  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:34.314571  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:34.332441  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:34.332477  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:34.406303  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
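Every "describe nodes" attempt above dies with "connection to the server localhost:8443 was refused", which is the TCP-level symptom of the missing kube-apiserver container: nothing is listening on the apiserver port inside the VM. A quick hedged probe of that condition (address and timeout chosen for illustration):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverUp reports whether anything accepts TCP connections on addr.
    // kubectl's "connection refused" corresponds to this returning false.
    func apiserverUp(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println("apiserver reachable:", apiserverUp("localhost:8443"))
    }

Run inside the guest VM, this would print false for the entire window covered by this log, matching the repeated kubectl failures.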
	I0229 02:34:36.906814  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:36.922297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:36.922377  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:36.967550  370051 cri.go:89] found id: ""
	I0229 02:34:36.967578  370051 logs.go:276] 0 containers: []
	W0229 02:34:36.967589  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:36.967599  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:36.967662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:37.007589  370051 cri.go:89] found id: ""
	I0229 02:34:37.007614  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.007624  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:37.007632  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:37.007706  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:37.048230  370051 cri.go:89] found id: ""
	I0229 02:34:37.048260  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.048273  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:37.048281  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:37.048354  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:37.089329  370051 cri.go:89] found id: ""
	I0229 02:34:37.089355  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.089365  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:37.089373  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:37.089441  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:37.144654  370051 cri.go:89] found id: ""
	I0229 02:34:37.144687  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.144699  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:37.144708  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:37.144778  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:37.203822  370051 cri.go:89] found id: ""
	I0229 02:34:37.203857  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.203868  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:37.203876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:37.203948  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:37.250369  370051 cri.go:89] found id: ""
	I0229 02:34:37.250398  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.250410  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:37.250417  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:37.250490  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:37.290924  370051 cri.go:89] found id: ""
	I0229 02:34:37.290957  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.290969  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:37.290981  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:37.290995  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:37.343878  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:37.343920  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:37.359307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:37.359336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:37.435264  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:37.435292  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:37.435309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:37.518274  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:37.518309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:40.062232  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:40.079883  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:40.079957  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:40.123826  370051 cri.go:89] found id: ""
	I0229 02:34:40.123856  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.123866  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:40.123874  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:40.123943  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:40.190273  370051 cri.go:89] found id: ""
	I0229 02:34:40.190321  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.190332  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:40.190338  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:40.190395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:40.232921  370051 cri.go:89] found id: ""
	I0229 02:34:40.232949  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.232961  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:40.232968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:40.233034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:40.273490  370051 cri.go:89] found id: ""
	I0229 02:34:40.273517  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.273526  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:40.273538  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:40.273594  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:40.317121  370051 cri.go:89] found id: ""
	I0229 02:34:40.317152  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.317163  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:40.317171  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:40.317230  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:40.363347  370051 cri.go:89] found id: ""
	I0229 02:34:40.363380  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.363389  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:40.363396  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:40.363459  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:40.407187  370051 cri.go:89] found id: ""
	I0229 02:34:40.407213  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.407222  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:40.407231  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:40.407282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:40.447185  370051 cri.go:89] found id: ""
	I0229 02:34:40.447218  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.447229  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:40.447242  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:40.447258  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:40.496998  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:40.497029  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:40.512520  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:40.512549  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:40.589150  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:40.589173  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:40.589190  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:40.677054  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:40.677096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
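When the component queries come back empty, the loop falls back to collecting whatever diagnostics exist: the kubelet and CRI-O journals, filtered dmesg, and a raw container listing. A compact sketch of that gathering step, reusing the shell commands the log shows (the gather helper is ours, and $(...) stands in for the backtick substitution in the original command):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostic command through bash, as the log's
    // ssh_runner does, and reports how much output it produced.
    func gather(name, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("==> %s: %d bytes (err=%v)\n", name, len(out), err)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("CRI-O", "sudo journalctl -u crio -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status", "sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a")
    }

CombinedOutput is used in this sketch so stderr lands in the same capture, which is also how command failures surface as text in this report.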
	I0229 02:34:43.222265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:43.236567  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:43.236629  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:43.282917  370051 cri.go:89] found id: ""
	I0229 02:34:43.282959  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.282976  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:43.282982  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:43.283049  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:43.329273  370051 cri.go:89] found id: ""
	I0229 02:34:43.329302  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.329313  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:43.329321  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:43.329386  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:43.366696  370051 cri.go:89] found id: ""
	I0229 02:34:43.366723  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.366732  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:43.366739  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:43.366800  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:43.405793  370051 cri.go:89] found id: ""
	I0229 02:34:43.405820  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.405828  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:43.405834  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:43.405888  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:43.442870  370051 cri.go:89] found id: ""
	I0229 02:34:43.442898  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.442906  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:43.442912  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:43.442964  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:43.484581  370051 cri.go:89] found id: ""
	I0229 02:34:43.484615  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.484626  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:43.484635  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:43.484702  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:43.530931  370051 cri.go:89] found id: ""
	I0229 02:34:43.530954  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.530963  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:43.530968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:43.531024  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:43.572810  370051 cri.go:89] found id: ""
	I0229 02:34:43.572838  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.572850  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:43.572867  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:43.572883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:43.622815  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:43.622854  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:43.637972  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:43.638012  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:43.713704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:43.713728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:43.713746  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:43.797178  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:43.797220  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:46.347159  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:46.361601  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:46.361682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:46.399751  370051 cri.go:89] found id: ""
	I0229 02:34:46.399784  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.399795  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:46.399804  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:46.399870  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:46.445367  370051 cri.go:89] found id: ""
	I0229 02:34:46.445398  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.445407  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:46.445413  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:46.445486  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:46.490323  370051 cri.go:89] found id: ""
	I0229 02:34:46.490363  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.490385  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:46.490393  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:46.490473  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:46.531406  370051 cri.go:89] found id: ""
	I0229 02:34:46.531441  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.531450  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:46.531456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:46.531507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:46.572759  370051 cri.go:89] found id: ""
	I0229 02:34:46.572787  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.572795  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:46.572804  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:46.572908  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:46.613055  370051 cri.go:89] found id: ""
	I0229 02:34:46.613083  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.613093  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:46.613099  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:46.613153  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:46.657504  370051 cri.go:89] found id: ""
	I0229 02:34:46.657536  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.657544  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:46.657550  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:46.657605  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:46.698008  370051 cri.go:89] found id: ""
	I0229 02:34:46.698057  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.698068  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:46.698080  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:46.698097  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:46.746648  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:46.746682  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:46.761190  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:46.761219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:46.843379  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:46.843403  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:46.843415  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:46.933493  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:46.933546  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.491837  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:49.508647  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:49.508717  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:49.550752  370051 cri.go:89] found id: ""
	I0229 02:34:49.550788  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.550800  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:49.550809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:49.550883  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:49.597623  370051 cri.go:89] found id: ""
	I0229 02:34:49.597663  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.597675  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:49.597683  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:49.597764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:49.635207  370051 cri.go:89] found id: ""
	I0229 02:34:49.635230  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.635238  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:49.635282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:49.635336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:49.674664  370051 cri.go:89] found id: ""
	I0229 02:34:49.674696  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.674708  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:49.674716  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:49.674777  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:49.715391  370051 cri.go:89] found id: ""
	I0229 02:34:49.715420  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.715433  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:49.715442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:49.715497  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:49.753318  370051 cri.go:89] found id: ""
	I0229 02:34:49.753352  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.753373  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:49.753382  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:49.753451  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:49.791342  370051 cri.go:89] found id: ""
	I0229 02:34:49.791369  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.791377  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:49.791384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:49.791456  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:49.838148  370051 cri.go:89] found id: ""
	I0229 02:34:49.838181  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.838191  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:49.838204  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:49.838244  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:49.891532  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:49.891568  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:49.917625  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:49.917664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:50.019436  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:50.019457  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:50.019472  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:50.108302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:50.108349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:52.654561  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:52.668331  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:52.668402  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:52.718431  370051 cri.go:89] found id: ""
	I0229 02:34:52.718471  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.718484  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:52.718493  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:52.718551  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:52.757913  370051 cri.go:89] found id: ""
	I0229 02:34:52.757946  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.757957  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:52.757965  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:52.758035  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:52.796792  370051 cri.go:89] found id: ""
	I0229 02:34:52.796821  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.796833  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:52.796842  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:52.796913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:52.832157  370051 cri.go:89] found id: ""
	I0229 02:34:52.832187  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.832196  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:52.832203  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:52.832264  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:52.879170  370051 cri.go:89] found id: ""
	I0229 02:34:52.879197  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.879206  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:52.879212  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:52.879265  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:52.924219  370051 cri.go:89] found id: ""
	I0229 02:34:52.924249  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.924258  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:52.924264  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:52.924318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:52.980422  370051 cri.go:89] found id: ""
	I0229 02:34:52.980450  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.980457  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:52.980463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:52.980525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:53.026393  370051 cri.go:89] found id: ""
	I0229 02:34:53.026418  370051 logs.go:276] 0 containers: []
	W0229 02:34:53.026426  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:53.026436  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:53.026453  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:53.075135  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:53.075174  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:53.092197  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:53.092223  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:53.164397  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:53.164423  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:53.164439  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:53.250310  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:53.250366  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:55.792993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:55.807152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:55.807229  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:55.867791  370051 cri.go:89] found id: ""
	I0229 02:34:55.867821  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.867830  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:55.867847  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:55.867925  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:55.922960  370051 cri.go:89] found id: ""
	I0229 02:34:55.922989  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.923001  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:55.923009  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:55.923076  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:55.972510  370051 cri.go:89] found id: ""
	I0229 02:34:55.972541  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.972552  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:55.972560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:55.972632  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:56.011948  370051 cri.go:89] found id: ""
	I0229 02:34:56.011980  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.011990  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:56.011999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:56.012077  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:56.052624  370051 cri.go:89] found id: ""
	I0229 02:34:56.052653  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.052662  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:56.052668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:56.052722  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:56.089075  370051 cri.go:89] found id: ""
	I0229 02:34:56.089100  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.089108  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:56.089114  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:56.089180  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:56.130369  370051 cri.go:89] found id: ""
	I0229 02:34:56.130403  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.130416  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:56.130424  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:56.130496  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:56.177812  370051 cri.go:89] found id: ""
	I0229 02:34:56.177843  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.177854  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:56.177875  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:56.177894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:56.224294  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:56.224336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:56.275874  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:56.275909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:56.291172  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:56.291202  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:56.364839  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:56.364870  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:56.364888  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:58.950871  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:58.966327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:58.966389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:59.005914  370051 cri.go:89] found id: ""
	I0229 02:34:59.005952  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.005968  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:59.005976  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:59.006045  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:59.043962  370051 cri.go:89] found id: ""
	I0229 02:34:59.043993  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.044005  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:59.044013  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:59.044167  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:59.089398  370051 cri.go:89] found id: ""
	I0229 02:34:59.089426  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.089434  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:59.089440  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:59.089491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:59.130786  370051 cri.go:89] found id: ""
	I0229 02:34:59.130815  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.130824  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:59.130830  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:59.130909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:59.174807  370051 cri.go:89] found id: ""
	I0229 02:34:59.174836  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.174848  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:59.174855  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:59.174929  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:59.217745  370051 cri.go:89] found id: ""
	I0229 02:34:59.217792  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.217800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:59.217806  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:59.217858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:59.260906  370051 cri.go:89] found id: ""
	I0229 02:34:59.260939  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.260950  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:59.260957  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:59.261025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:59.299114  370051 cri.go:89] found id: ""
	I0229 02:34:59.299140  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.299150  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:59.299161  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:59.299173  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:59.349630  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:59.349672  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:59.365679  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:59.365710  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:59.438234  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:59.438261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:59.438280  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:59.524185  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:59.524219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:02.068320  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:02.082910  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:02.082988  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:02.122095  370051 cri.go:89] found id: ""
	I0229 02:35:02.122132  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.122145  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:02.122153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:02.122245  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:02.160982  370051 cri.go:89] found id: ""
	I0229 02:35:02.161013  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.161029  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:02.161043  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:02.161108  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:02.200603  370051 cri.go:89] found id: ""
	I0229 02:35:02.200637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.200650  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:02.200658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:02.200746  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:02.243100  370051 cri.go:89] found id: ""
	I0229 02:35:02.243126  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.243134  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:02.243140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:02.243207  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:02.282758  370051 cri.go:89] found id: ""
	I0229 02:35:02.282793  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.282806  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:02.282815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:02.282884  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:02.324402  370051 cri.go:89] found id: ""
	I0229 02:35:02.324434  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.324444  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:02.324455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:02.324520  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:02.368608  370051 cri.go:89] found id: ""
	I0229 02:35:02.368637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.368650  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:02.368658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:02.368726  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:02.411449  370051 cri.go:89] found id: ""
	I0229 02:35:02.411484  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.411497  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:02.411509  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:02.411526  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:02.427942  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:02.427974  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:02.498848  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:02.498884  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:02.498902  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:02.585701  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:02.585749  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:02.642055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:02.642096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
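	(The block above is one complete iteration of minikube's apiserver wait loop. The timestamps show it repeating roughly every 3 seconds: probe for a kube-apiserver process, list CRI containers for each control-plane component, and, finding none, gather kubelet, dmesg, "describe nodes", CRI-O, and container-status logs. A minimal shell sketch of one iteration, assembled only from the commands visible in the log — the loop body and the 3-second sleep are inferred from the timestamps, not logged as such:

	    # probe for a running apiserver process
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # enumerate CRI containers per component; empty output means "not found"
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done
	    # diagnostics gathered on each failed iteration
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sleep 3
	)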
	I0229 02:35:05.201769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:05.215944  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:05.216020  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:05.254080  370051 cri.go:89] found id: ""
	I0229 02:35:05.254107  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.254121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:05.254128  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:05.254179  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:05.296990  370051 cri.go:89] found id: ""
	I0229 02:35:05.297022  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.297034  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:05.297042  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:05.297111  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:05.336241  370051 cri.go:89] found id: ""
	I0229 02:35:05.336275  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.336290  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:05.336299  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:05.336395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:05.377620  370051 cri.go:89] found id: ""
	I0229 02:35:05.377649  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.377658  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:05.377664  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:05.377712  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:05.416275  370051 cri.go:89] found id: ""
	I0229 02:35:05.416303  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.416311  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:05.416318  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:05.416373  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:05.455375  370051 cri.go:89] found id: ""
	I0229 02:35:05.455412  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.455426  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:05.455436  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:05.455507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:05.495862  370051 cri.go:89] found id: ""
	I0229 02:35:05.495887  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.495897  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:05.495905  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:05.495969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:05.541218  370051 cri.go:89] found id: ""
	I0229 02:35:05.541247  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.541260  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:05.541273  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:05.541288  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:05.629982  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:05.630023  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:05.719026  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:05.719066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.785318  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:05.785359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:05.801181  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:05.801214  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:05.871333  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
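	(Every "describe nodes" attempt fails the same way: kubectl cannot reach localhost:8443, which is consistent with the empty kube-apiserver listings above — no apiserver container ever comes up to serve that port. A quick hypothetical check on the node, not taken from the log, would be:

	    # assumption: ss is available on the node; confirms nothing listens on 8443
	    sudo ss -tlnp | grep ':8443' || echo 'no listener on 8443'
	)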
	I0229 02:35:08.371982  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:08.386451  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:08.386514  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:08.430045  370051 cri.go:89] found id: ""
	I0229 02:35:08.430077  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.430090  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:08.430099  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:08.430169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:08.470547  370051 cri.go:89] found id: ""
	I0229 02:35:08.470583  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.470596  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:08.470604  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:08.470671  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:08.512637  370051 cri.go:89] found id: ""
	I0229 02:35:08.512676  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.512687  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:08.512695  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:08.512759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:08.556228  370051 cri.go:89] found id: ""
	I0229 02:35:08.556263  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.556271  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:08.556277  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:08.556335  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:08.613838  370051 cri.go:89] found id: ""
	I0229 02:35:08.613868  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.613878  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:08.613884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:08.613940  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:08.686408  370051 cri.go:89] found id: ""
	I0229 02:35:08.686442  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.686454  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:08.686462  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:08.686519  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:08.725665  370051 cri.go:89] found id: ""
	I0229 02:35:08.725697  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.725710  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:08.725719  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:08.725776  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:08.765639  370051 cri.go:89] found id: ""
	I0229 02:35:08.765666  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.765674  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:08.765684  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:08.765695  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:08.813097  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:08.813135  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:08.828880  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:08.828909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:08.903237  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:08.903261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:08.903281  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:08.991710  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:08.991745  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:11.536724  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:11.551614  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:11.551690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:11.593078  370051 cri.go:89] found id: ""
	I0229 02:35:11.593110  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.593121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:11.593129  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:11.593185  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:11.645696  370051 cri.go:89] found id: ""
	I0229 02:35:11.645729  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.645742  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:11.645751  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:11.645820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:11.691181  370051 cri.go:89] found id: ""
	I0229 02:35:11.691213  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.691226  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:11.691245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:11.691318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:11.745906  370051 cri.go:89] found id: ""
	I0229 02:35:11.745933  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.745946  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:11.745953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:11.746019  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:11.784895  370051 cri.go:89] found id: ""
	I0229 02:35:11.784927  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.784940  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:11.784949  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:11.785025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:11.825341  370051 cri.go:89] found id: ""
	I0229 02:35:11.825372  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.825384  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:11.825392  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:11.825464  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:11.862454  370051 cri.go:89] found id: ""
	I0229 02:35:11.862492  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.862505  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:11.862523  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:11.862604  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:11.908424  370051 cri.go:89] found id: ""
	I0229 02:35:11.908450  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.908459  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:11.908469  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:11.908487  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:11.956274  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:11.956313  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:11.972363  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:11.972397  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:12.052030  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:12.052057  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:12.052078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:12.138388  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:12.138431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.691474  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:14.724652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:14.724739  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:14.765210  370051 cri.go:89] found id: ""
	I0229 02:35:14.765237  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.765246  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:14.765253  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:14.765306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:14.808226  370051 cri.go:89] found id: ""
	I0229 02:35:14.808258  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.808270  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:14.808287  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:14.808357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:14.847999  370051 cri.go:89] found id: ""
	I0229 02:35:14.848030  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.848041  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:14.848049  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:14.848123  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:14.887221  370051 cri.go:89] found id: ""
	I0229 02:35:14.887248  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.887256  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:14.887263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:14.887339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:14.929905  370051 cri.go:89] found id: ""
	I0229 02:35:14.929933  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.929950  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:14.929956  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:14.930011  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:14.969697  370051 cri.go:89] found id: ""
	I0229 02:35:14.969739  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.969761  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:14.969770  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:14.969837  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:15.013387  370051 cri.go:89] found id: ""
	I0229 02:35:15.013418  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.013429  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:15.013437  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:15.013493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:15.058199  370051 cri.go:89] found id: ""
	I0229 02:35:15.058240  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.058253  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:15.058270  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:15.058287  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:15.110165  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:15.110213  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:15.127417  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:15.127452  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:15.203330  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:15.203370  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:15.203405  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:15.283455  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:15.283501  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:17.829187  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:17.844678  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:17.844759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:17.885549  370051 cri.go:89] found id: ""
	I0229 02:35:17.885581  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.885594  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:17.885601  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:17.885670  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:17.925652  370051 cri.go:89] found id: ""
	I0229 02:35:17.925679  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.925691  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:17.925699  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:17.925766  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:17.963172  370051 cri.go:89] found id: ""
	I0229 02:35:17.963203  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.963215  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:17.963224  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:17.963282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:18.003528  370051 cri.go:89] found id: ""
	I0229 02:35:18.003560  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.003572  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:18.003579  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:18.003644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:18.046494  370051 cri.go:89] found id: ""
	I0229 02:35:18.046526  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.046537  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:18.046545  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:18.046613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:18.084963  370051 cri.go:89] found id: ""
	I0229 02:35:18.084993  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.085004  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:18.085013  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:18.085074  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:18.125521  370051 cri.go:89] found id: ""
	I0229 02:35:18.125547  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.125556  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:18.125563  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:18.125623  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:18.169963  370051 cri.go:89] found id: ""
	I0229 02:35:18.169995  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.170006  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:18.170020  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:18.170035  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:18.225414  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:18.225460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:18.242069  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:18.242108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:18.312704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:18.312728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:18.312742  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:18.397206  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:18.397249  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:20.968000  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:20.983115  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:20.983196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:21.025710  370051 cri.go:89] found id: ""
	I0229 02:35:21.025735  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.025743  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:21.025749  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:21.025812  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:21.065825  370051 cri.go:89] found id: ""
	I0229 02:35:21.065854  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.065862  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:21.065868  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:21.065928  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:21.104738  370051 cri.go:89] found id: ""
	I0229 02:35:21.104770  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.104782  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:21.104790  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:21.104871  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:21.147180  370051 cri.go:89] found id: ""
	I0229 02:35:21.147211  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.147221  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:21.147228  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:21.147284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:21.187240  370051 cri.go:89] found id: ""
	I0229 02:35:21.187275  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.187287  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:21.187295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:21.187389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:21.228873  370051 cri.go:89] found id: ""
	I0229 02:35:21.228899  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.228917  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:21.228924  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:21.228992  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:21.268827  370051 cri.go:89] found id: ""
	I0229 02:35:21.268856  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.268867  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:21.268876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:21.268970  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:21.313253  370051 cri.go:89] found id: ""
	I0229 02:35:21.313288  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.313297  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:21.313307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:21.313328  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:21.448089  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:21.448120  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:21.448146  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:21.539941  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:21.539983  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:21.590148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:21.590186  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:21.647760  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:21.647797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:24.165842  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:24.183263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:24.183345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:24.233173  370051 cri.go:89] found id: ""
	I0229 02:35:24.233208  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.233219  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:24.233228  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:24.233301  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:24.276937  370051 cri.go:89] found id: ""
	I0229 02:35:24.276977  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.276989  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:24.276998  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:24.277066  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:24.314629  370051 cri.go:89] found id: ""
	I0229 02:35:24.314665  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.314678  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:24.314686  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:24.314753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:24.367585  370051 cri.go:89] found id: ""
	I0229 02:35:24.367618  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.367630  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:24.367639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:24.367709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:24.451128  370051 cri.go:89] found id: ""
	I0229 02:35:24.451151  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.451160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:24.451167  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:24.451258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:24.497302  370051 cri.go:89] found id: ""
	I0229 02:35:24.497336  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.497348  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:24.497357  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:24.497431  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:24.544593  370051 cri.go:89] found id: ""
	I0229 02:35:24.544621  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.544632  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:24.544640  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:24.544714  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:24.584570  370051 cri.go:89] found id: ""
	I0229 02:35:24.584601  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.584613  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:24.584626  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:24.584645  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:24.669019  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:24.669044  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:24.669061  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:24.752163  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:24.752205  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:24.811945  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:24.811985  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:24.874832  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:24.874873  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:27.392846  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:27.419255  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:27.419339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:27.465294  370051 cri.go:89] found id: ""
	I0229 02:35:27.465325  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.465337  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:27.465345  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:27.465417  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:27.533393  370051 cri.go:89] found id: ""
	I0229 02:35:27.533424  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.533433  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:27.533441  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:27.533510  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:27.587195  370051 cri.go:89] found id: ""
	I0229 02:35:27.587221  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.587232  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:27.587240  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:27.587313  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:27.638597  370051 cri.go:89] found id: ""
	I0229 02:35:27.638624  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.638632  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:27.638639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:27.638709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.687695  370051 cri.go:89] found id: ""
	I0229 02:35:27.687730  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.687742  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:27.687750  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.687825  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.732275  370051 cri.go:89] found id: ""
	I0229 02:35:27.732309  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.732320  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:27.732327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.732389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.783069  370051 cri.go:89] found id: ""
	I0229 02:35:27.783109  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.783122  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.783133  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:27.783224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:27.832385  370051 cri.go:89] found id: ""
	I0229 02:35:27.832416  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.832429  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:27.832443  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.832460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:27.902610  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:27.902658  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:27.919900  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:27.919947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:28.003313  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:28.003337  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:28.003356  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:28.100814  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.100853  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:30.654289  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:30.683056  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:30.683141  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:30.734678  370051 cri.go:89] found id: ""
	I0229 02:35:30.734704  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.734712  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:30.734719  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:30.734771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:30.780792  370051 cri.go:89] found id: ""
	I0229 02:35:30.780821  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.780830  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:30.780837  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:30.780904  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:30.827244  370051 cri.go:89] found id: ""
	I0229 02:35:30.827269  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.827278  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:30.827285  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:30.827336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:30.871305  370051 cri.go:89] found id: ""
	I0229 02:35:30.871333  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.871342  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:30.871348  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:30.871423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:30.910095  370051 cri.go:89] found id: ""
	I0229 02:35:30.910121  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.910130  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:30.910136  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:30.910188  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:30.955234  370051 cri.go:89] found id: ""
	I0229 02:35:30.955261  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.955271  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:30.955278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:30.955345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:30.996555  370051 cri.go:89] found id: ""
	I0229 02:35:30.996589  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.996602  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:30.996611  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:30.996687  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:31.036424  370051 cri.go:89] found id: ""
	I0229 02:35:31.036454  370051 logs.go:276] 0 containers: []
	W0229 02:35:31.036464  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:31.036474  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.036488  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.107928  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.107987  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.125268  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.125303  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:31.217691  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:31.217717  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:31.217740  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:31.313847  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:31.313883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:33.861648  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:33.876887  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:33.876954  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:33.921545  370051 cri.go:89] found id: ""
	I0229 02:35:33.921577  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.921588  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:33.921597  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:33.921658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:33.972558  370051 cri.go:89] found id: ""
	I0229 02:35:33.972584  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.972592  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:33.972599  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:33.972662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:34.020821  370051 cri.go:89] found id: ""
	I0229 02:35:34.020852  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.020862  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:34.020873  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:34.020937  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:34.064076  370051 cri.go:89] found id: ""
	I0229 02:35:34.064110  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.064121  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:34.064129  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:34.064191  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:34.108523  370051 cri.go:89] found id: ""
	I0229 02:35:34.108557  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.108568  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:34.108576  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:34.108639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:34.149444  370051 cri.go:89] found id: ""
	I0229 02:35:34.149468  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.149478  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:34.149487  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:34.149562  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:34.193780  370051 cri.go:89] found id: ""
	I0229 02:35:34.193805  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.193814  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:34.193820  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:34.193913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:34.237088  370051 cri.go:89] found id: ""
	I0229 02:35:34.237118  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.237127  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:34.237137  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:34.237151  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:34.281055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:34.281091  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:34.333886  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:34.333925  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:34.353163  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:34.353204  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:34.465925  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:34.465951  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:34.465969  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:37.049957  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:37.064297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:37.064384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:37.105669  370051 cri.go:89] found id: ""
	I0229 02:35:37.105703  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.105711  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:37.105720  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:37.105790  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:37.143753  370051 cri.go:89] found id: ""
	I0229 02:35:37.143788  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.143799  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:37.143808  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:37.143880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:37.180126  370051 cri.go:89] found id: ""
	I0229 02:35:37.180157  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.180166  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:37.180173  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:37.180227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:37.221135  370051 cri.go:89] found id: ""
	I0229 02:35:37.221173  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.221185  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:37.221193  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:37.221261  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:37.258888  370051 cri.go:89] found id: ""
	I0229 02:35:37.258920  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.258932  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:37.258940  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:37.259005  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:37.300970  370051 cri.go:89] found id: ""
	I0229 02:35:37.300998  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.301010  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:37.301018  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:37.301105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:37.349797  370051 cri.go:89] found id: ""
	I0229 02:35:37.349829  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.349841  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:37.349850  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:37.349916  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:37.408726  370051 cri.go:89] found id: ""
	I0229 02:35:37.408762  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.408773  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:37.408787  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:37.408805  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:37.462030  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:37.462064  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:37.477836  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:37.477868  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:37.553886  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:37.553924  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:37.553941  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:37.644637  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:37.644683  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:40.197937  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:40.212830  370051 kubeadm.go:640] restartCluster took 4m14.648338345s
	W0229 02:35:40.212984  370051 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 02:35:40.213021  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:35:40.673169  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:40.690108  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:40.702424  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:40.713782  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:35:40.713832  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:35:40.775345  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:35:40.775527  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:35:40.929045  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:35:40.929185  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:35:40.929310  370051 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:35:41.154311  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:35:41.154449  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:35:41.162905  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:35:41.317651  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:35:41.319260  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:35:41.319358  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:35:41.319458  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:35:41.319564  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:35:41.319675  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:35:41.319772  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:35:41.319857  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:35:41.319963  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:35:41.320066  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:35:41.320166  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:35:41.320289  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:35:41.320357  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:35:41.320439  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:35:41.457291  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:35:41.599703  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:35:41.766344  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:35:41.939397  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:35:41.940740  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:35:41.942544  370051 out.go:204]   - Booting up control plane ...
	I0229 02:35:41.942656  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:35:41.946949  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:35:41.949540  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:35:41.950426  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:35:41.953310  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:36:21.953781  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:36:21.954431  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:21.954685  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:26.954801  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:26.955093  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:36.955344  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:36.955543  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:56.957911  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:56.958178  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:36.959509  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:37:36.959795  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:36.959812  370051 kubeadm.go:322] 
	I0229 02:37:36.959848  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:37:36.959887  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:37:36.959893  370051 kubeadm.go:322] 
	I0229 02:37:36.959937  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:37:36.959991  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:37:36.960142  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:37:36.960167  370051 kubeadm.go:322] 
	I0229 02:37:36.960282  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:37:36.960318  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:37:36.960362  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:37:36.960371  370051 kubeadm.go:322] 
	I0229 02:37:36.960482  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:37:36.960617  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:37:36.960756  370051 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:37:36.960839  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:37:36.960951  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:37:36.961015  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:37:36.961366  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:37:36.961507  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:37:36.961616  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
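The repeated kubelet-check lines above are kubeadm polling the kubelet's local health endpoint on port 10248. A minimal way to repeat that probe by hand from inside the guest, sketched from the commands kubeadm and this log already suggest:

	# probe the health endpoint kubeadm's kubelet-check hits
	curl -sSL http://localhost:10248/healthz
	# inspect the kubelet service and its journal
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50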
	W0229 02:37:36.961763  370051 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
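Because this cluster runs CRI-O rather than docker, kubeadm's docker-based triage advice above maps to crictl, which this log already invokes. A sketch of the equivalent commands, assuming crictl is on the guest's PATH:

	# list all Kubernetes containers under CRI-O
	sudo crictl ps -a | grep kube | grep -v pause
	# inspect a failing container's logs (<CONTAINER-ID> is a placeholder from the ps output)
	sudo crictl logs <CONTAINER-ID>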
	
	I0229 02:37:36.961835  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:37:37.427665  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:37:37.443045  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:37:37.456937  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:37:37.456979  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:37:37.529093  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:37:37.529246  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:37:37.670260  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:37:37.670417  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:37:37.670548  370051 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:37:37.904220  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:37:37.905569  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:37:37.914919  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:37:38.070911  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:37:38.072738  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:37:38.072860  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:37:38.072951  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:37:38.073049  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:37:38.073132  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:37:38.073230  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:37:38.073299  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:37:38.073376  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:37:38.073458  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:37:38.073566  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:37:38.073680  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:37:38.073720  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:37:38.073794  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:37:38.209805  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:37:38.305550  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:37:38.464715  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:37:38.623139  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:37:38.624364  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:37:38.625883  370051 out.go:204]   - Booting up control plane ...
	I0229 02:37:38.626039  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:37:38.630668  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:37:38.631740  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:37:38.632687  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:37:38.636043  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:38:18.637746  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:38:18.638616  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:18.638883  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:23.639374  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:23.639613  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:33.640169  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:33.640468  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:53.640871  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:53.641147  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:39:33.642813  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:39:33.643083  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:39:33.643099  370051 kubeadm.go:322] 
	I0229 02:39:33.643153  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:39:33.643206  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:39:33.643213  370051 kubeadm.go:322] 
	I0229 02:39:33.643252  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:39:33.643296  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:39:33.643443  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:39:33.643455  370051 kubeadm.go:322] 
	I0229 02:39:33.643605  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:39:33.643655  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:39:33.643700  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:39:33.643714  370051 kubeadm.go:322] 
	I0229 02:39:33.643871  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:39:33.644040  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:39:33.644193  370051 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:39:33.644272  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:39:33.644371  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:39:33.644412  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:39:33.644855  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:39:33.644972  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:39:33.645065  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:39:33.645132  370051 kubeadm.go:406] StartCluster complete in 8m8.138449101s
	I0229 02:39:33.645178  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:39:33.645255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:39:33.699121  370051 cri.go:89] found id: ""
	I0229 02:39:33.699154  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.699166  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:39:33.699174  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:39:33.699240  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:39:33.747229  370051 cri.go:89] found id: ""
	I0229 02:39:33.747260  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.747272  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:39:33.747279  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:39:33.747349  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:39:33.789303  370051 cri.go:89] found id: ""
	I0229 02:39:33.789334  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.789343  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:39:33.789350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:39:33.789413  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:39:33.832769  370051 cri.go:89] found id: ""
	I0229 02:39:33.832801  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.832814  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:39:33.832824  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:39:33.832891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:39:33.881508  370051 cri.go:89] found id: ""
	I0229 02:39:33.881543  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.881554  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:39:33.881571  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:39:33.881635  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:39:33.941691  370051 cri.go:89] found id: ""
	I0229 02:39:33.941728  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.941740  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:39:33.941749  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:39:33.941822  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:39:33.990639  370051 cri.go:89] found id: ""
	I0229 02:39:33.990681  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.990704  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:39:33.990713  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:39:33.990774  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:39:34.038426  370051 cri.go:89] found id: ""
	I0229 02:39:34.038460  370051 logs.go:276] 0 containers: []
	W0229 02:39:34.038470  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:39:34.038480  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:39:34.038497  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:39:34.054571  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:39:34.054604  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:39:34.131297  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:39:34.131323  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:39:34.131337  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:39:34.232302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:39:34.232349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:39:34.283314  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:39:34.283351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:39:34.336858  370051 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:39:34.336920  370051 out.go:239] * 
	W0229 02:39:34.336985  370051 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:39:34.337006  370051 out.go:239] * 
	W0229 02:39:34.337787  370051 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
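The box above asks for a log bundle when filing an issue. A hedged example of producing one for this cluster (the profile name is a placeholder, not taken from this log):

	minikube logs --file=logs.txt -p <profile>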
	I0229 02:39:34.340744  370051 out.go:177] 
	W0229 02:39:34.342096  370051 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:39:34.342137  370051 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:39:34.342160  370051 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:39:34.343540  370051 out.go:177] 

                                                
                                                
** /stderr **
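The container-inspection steps embedded in the kubeadm output above are written for docker, but this profile runs with --container-runtime=crio. A hedged equivalent using crictl (assuming crictl is present on the node, reached through minikube ssh) would be:

	out/minikube-linux-amd64 -p old-k8s-version-275488 ssh "sudo crictl ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p old-k8s-version-275488 ssh "sudo crictl logs CONTAINERID"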
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-275488 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0": exit status 109
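The suggestion logged above can be applied by re-running the same start invocation with the extra kubelet config appended; a sketch, not verified against this run:

	out/minikube-linux-amd64 start -p old-k8s-version-275488 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd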
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 2 (275.586793ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
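The harness queries only the {{.Host}} field here; a fuller manual check of the same profile could select the other fields of the status go-template (field names assumed from minikube's default status output):

	out/minikube-linux-amd64 status -p old-k8s-version-275488 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'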
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-275488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-275488 logs -n 25: (1.673744325s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-117441 sudo cat                              | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo find                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo crio                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-117441                                       | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-542968 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | disable-driver-mounts-542968                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:23 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-915633            | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247751             | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071485  | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275488        | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-915633                 | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247751                  | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071485       | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275488             | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:26:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:26:36.132854  370051 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:26:36.133389  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133407  370051 out.go:304] Setting ErrFile to fd 2...
	I0229 02:26:36.133414  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133912  370051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:26:36.134959  370051 out.go:298] Setting JSON to false
	I0229 02:26:36.135907  370051 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7739,"bootTime":1709165857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:26:36.135982  370051 start.go:139] virtualization: kvm guest
	I0229 02:26:36.137916  370051 out.go:177] * [old-k8s-version-275488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:26:36.139510  370051 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:26:36.139543  370051 notify.go:220] Checking for updates...
	I0229 02:26:36.141206  370051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:26:36.142776  370051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:26:36.143982  370051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:26:36.145097  370051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:26:36.146170  370051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:26:36.147751  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:26:36.148198  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.148298  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.163969  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0229 02:26:36.164373  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.164977  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.165003  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.165394  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.165584  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.167312  370051 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 02:26:36.168337  370051 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:26:36.168641  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.168683  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.184089  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0229 02:26:36.184605  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.185181  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.185210  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.185551  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.185723  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.222261  370051 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:26:36.223363  370051 start.go:299] selected driver: kvm2
	I0229 02:26:36.223374  370051 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.223487  370051 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:26:36.224130  370051 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.224195  370051 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:26:36.239302  370051 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:26:36.239664  370051 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:26:36.239741  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:26:36.239755  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:26:36.239765  370051 start_flags.go:323] config:
	{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.239908  370051 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.241466  370051 out.go:177] * Starting control plane node old-k8s-version-275488 in cluster old-k8s-version-275488
	I0229 02:26:35.666509  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:38.738602  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:36.242536  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:26:36.242564  370051 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 02:26:36.242573  370051 cache.go:56] Caching tarball of preloaded images
	I0229 02:26:36.242641  370051 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:26:36.242651  370051 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 02:26:36.242742  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:26:36.242905  370051 start.go:365] acquiring machines lock for old-k8s-version-275488: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:26:44.818494  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:47.890482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:53.970508  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:57.042448  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:03.122506  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:06.194415  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:12.274520  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:15.346558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:21.426515  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:24.498557  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:30.578502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:33.650482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:39.730548  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:42.802507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:48.882487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:51.954507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:58.034498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:01.106530  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:07.186513  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:10.258485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:16.338519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:19.410521  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:25.490436  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:28.562555  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:34.642534  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:37.714514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:43.794519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:46.866487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:52.946514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:56.018488  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:02.098512  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:05.170472  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:11.250485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:14.322454  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:20.402450  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:23.474533  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:29.554541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:32.626489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:38.706558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:41.778502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:47.858493  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:50.930489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:57.010541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:00.082537  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:06.162498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:09.234502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
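	The long run of "no route to host" errors above shows libmachine's SSH dialer never reaching the embed-certs VM at 192.168.50.218:22. With the kvm2 driver this is typically cross-checked from the host with libvirt's own tooling; a sketch, assuming the libvirt domain is named after the profile as the kvm2 driver does:

		virsh net-list --all
		virsh domifaddr embed-certs-915633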
	I0229 02:30:12.238620  369591 start.go:369] acquired machines lock for "no-preload-247751" in 4m33.303501223s
	I0229 02:30:12.238705  369591 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:12.238716  369591 fix.go:54] fixHost starting: 
	I0229 02:30:12.239171  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:12.239240  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:12.254984  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0229 02:30:12.255490  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:12.255991  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:30:12.256012  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:12.256463  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:12.256668  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:12.256840  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:30:12.258341  369591 fix.go:102] recreateIfNeeded on no-preload-247751: state=Stopped err=<nil>
	I0229 02:30:12.258371  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	W0229 02:30:12.258522  369591 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:12.260176  369591 out.go:177] * Restarting existing kvm2 VM for "no-preload-247751" ...
	I0229 02:30:12.261521  369591 main.go:141] libmachine: (no-preload-247751) Calling .Start
	I0229 02:30:12.261678  369591 main.go:141] libmachine: (no-preload-247751) Ensuring networks are active...
	I0229 02:30:12.262375  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network default is active
	I0229 02:30:12.262642  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network mk-no-preload-247751 is active
	I0229 02:30:12.262962  369591 main.go:141] libmachine: (no-preload-247751) Getting domain xml...
	I0229 02:30:12.263526  369591 main.go:141] libmachine: (no-preload-247751) Creating domain...
	I0229 02:30:13.474816  369591 main.go:141] libmachine: (no-preload-247751) Waiting to get IP...
	I0229 02:30:13.475810  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.476251  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.476305  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.476230  370599 retry.go:31] will retry after 302.404435ms: waiting for machine to come up
	I0229 02:30:13.780776  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.781237  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.781265  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.781193  370599 retry.go:31] will retry after 364.673363ms: waiting for machine to come up
	I0229 02:30:12.236310  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:12.236352  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:30:12.238426  369508 machine.go:91] provisioned docker machine in 4m37.406828317s
	I0229 02:30:12.238513  369508 fix.go:56] fixHost completed within 4m37.429140371s
	I0229 02:30:12.238526  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 4m37.429164063s
	W0229 02:30:12.238553  369508 start.go:694] error starting host: provision: host is not running
	W0229 02:30:12.238763  369508 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 02:30:12.238784  369508 start.go:709] Will try again in 5 seconds ...
	I0229 02:30:14.148040  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.148530  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.148561  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.148471  370599 retry.go:31] will retry after 430.606986ms: waiting for machine to come up
	I0229 02:30:14.581180  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.581649  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.581679  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.581598  370599 retry.go:31] will retry after 557.726488ms: waiting for machine to come up
	I0229 02:30:15.141289  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.141736  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.141767  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.141675  370599 retry.go:31] will retry after 611.257074ms: waiting for machine to come up
	I0229 02:30:15.754464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.754802  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.754831  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.754752  370599 retry.go:31] will retry after 905.484801ms: waiting for machine to come up
	I0229 02:30:16.661691  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:16.662072  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:16.662099  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:16.662020  370599 retry.go:31] will retry after 1.007584217s: waiting for machine to come up
	I0229 02:30:17.671565  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:17.672118  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:17.672159  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:17.672048  370599 retry.go:31] will retry after 933.310317ms: waiting for machine to come up
	I0229 02:30:18.607108  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:18.607473  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:18.607496  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:18.607426  370599 retry.go:31] will retry after 1.135856775s: waiting for machine to come up
	I0229 02:30:17.239210  369508 start.go:365] acquiring machines lock for embed-certs-915633: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:30:19.744656  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:19.745017  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:19.745047  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:19.744969  370599 retry.go:31] will retry after 2.184552748s: waiting for machine to come up
	I0229 02:30:21.932313  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:21.932764  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:21.932794  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:21.932711  370599 retry.go:31] will retry after 2.256573009s: waiting for machine to come up
	I0229 02:30:24.191551  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:24.191987  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:24.192016  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:24.191948  370599 retry.go:31] will retry after 3.0850751s: waiting for machine to come up
	I0229 02:30:27.278526  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:27.278941  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:27.278977  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:27.278914  370599 retry.go:31] will retry after 3.196492358s: waiting for machine to come up
	I0229 02:30:31.627482  369869 start.go:369] acquired machines lock for "default-k8s-diff-port-071485" in 4m6.129938439s
	I0229 02:30:31.627553  369869 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:31.627561  369869 fix.go:54] fixHost starting: 
	I0229 02:30:31.628005  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:31.628052  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:31.645217  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0229 02:30:31.645607  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:31.646146  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:30:31.646179  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:31.646526  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:31.646754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:31.646941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:30:31.648372  369869 fix.go:102] recreateIfNeeded on default-k8s-diff-port-071485: state=Stopped err=<nil>
	I0229 02:30:31.648410  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	W0229 02:30:31.648603  369869 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:31.650778  369869 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071485" ...
	I0229 02:30:30.479186  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479664  369591 main.go:141] libmachine: (no-preload-247751) Found IP for machine: 192.168.72.114
	I0229 02:30:30.479694  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has current primary IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479705  369591 main.go:141] libmachine: (no-preload-247751) Reserving static IP address...
	I0229 02:30:30.480161  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.480199  369591 main.go:141] libmachine: (no-preload-247751) DBG | skip adding static IP to network mk-no-preload-247751 - found existing host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"}
	I0229 02:30:30.480213  369591 main.go:141] libmachine: (no-preload-247751) Reserved static IP address: 192.168.72.114
	I0229 02:30:30.480233  369591 main.go:141] libmachine: (no-preload-247751) Waiting for SSH to be available...
	I0229 02:30:30.480246  369591 main.go:141] libmachine: (no-preload-247751) DBG | Getting to WaitForSSH function...
	I0229 02:30:30.482557  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.482907  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.482935  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.483110  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH client type: external
	I0229 02:30:30.483136  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa (-rw-------)
	I0229 02:30:30.483166  369591 main.go:141] libmachine: (no-preload-247751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:30.483180  369591 main.go:141] libmachine: (no-preload-247751) DBG | About to run SSH command:
	I0229 02:30:30.483197  369591 main.go:141] libmachine: (no-preload-247751) DBG | exit 0
	I0229 02:30:30.610329  369591 main.go:141] libmachine: (no-preload-247751) DBG | SSH cmd err, output: <nil>: 
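The WaitForSSH exchange above probes the guest by shelling out to the system ssh client with the logged options and running `exit 0` until the command succeeds. A minimal sketch of that probe loop in Go, using the address and key path from this run (the helper name and retry interval are illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH keeps running "ssh ... exit 0" until the guest accepts the
    // connection, mirroring the "Getting to WaitForSSH function..." loop above.
    func waitForSSH(user, addr, keyPath string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("ssh",
    			"-o", "ConnectTimeout=10",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "IdentitiesOnly=yes",
    			"-i", keyPath,
    			fmt.Sprintf("%s@%s", user, addr),
    			"exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // the "SSH cmd err, output: <nil>" case in the log
    		}
    		time.Sleep(2 * time.Second) // assumed interval, for illustration
    	}
    	return fmt.Errorf("ssh to %s not available within %v", addr, timeout)
    }

    func main() {
    	err := waitForSSH("docker", "192.168.72.114",
    		"/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa",
    		2*time.Minute)
    	fmt.Println("wait result:", err)
    }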
	I0229 02:30:30.610691  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetConfigRaw
	I0229 02:30:30.611393  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.614007  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614393  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.614426  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614689  369591 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/config.json ...
	I0229 02:30:30.614872  369591 machine.go:88] provisioning docker machine ...
	I0229 02:30:30.614892  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:30.615096  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615250  369591 buildroot.go:166] provisioning hostname "no-preload-247751"
	I0229 02:30:30.615272  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615444  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.617525  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617800  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.617835  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617898  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.618095  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618289  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618424  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.618564  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.618790  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.618807  369591 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-247751 && echo "no-preload-247751" | sudo tee /etc/hostname
	I0229 02:30:30.740902  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-247751
	
	I0229 02:30:30.740952  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.743879  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744353  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.744396  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744584  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.744843  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745014  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745197  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.745351  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.745525  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.745543  369591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-247751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247751/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-247751' | sudo tee -a /etc/hosts; 
				fi
			fi
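The heredoc above pins 127.0.1.1 to the new hostname in the guest's /etc/hosts. A sketch of running that same script remotely with golang.org/x/crypto/ssh, assuming the key and address from the log (minikube's own ssh_runner wraps this differently):

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    const pinHostname = `if ! grep -xq '.*\sno-preload-247751' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247751/g' /etc/hosts
      else
        echo '127.0.1.1 no-preload-247751' | sudo tee -a /etc/hosts
      fi
    fi`

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := ssh.Dial("tcp", "192.168.72.114:22", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no above
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(pinHostname)
    	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }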
	I0229 02:30:30.867175  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:30.867209  369591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:30.867229  369591 buildroot.go:174] setting up certificates
	I0229 02:30:30.867240  369591 provision.go:83] configureAuth start
	I0229 02:30:30.867248  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.867521  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.870143  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870443  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.870464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870678  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.872992  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873434  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.873463  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873643  369591 provision.go:138] copyHostCerts
	I0229 02:30:30.873713  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:30.873740  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:30.873830  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:30.873937  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:30.873948  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:30.873992  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:30.874070  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:30.874080  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:30.874110  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:30.874240  369591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.no-preload-247751 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube no-preload-247751]
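provision.go:112 issues a server certificate whose SANs cover the VM IP, localhost, and the machine name, as listed in the san=[...] field above. A compact sketch of that SAN handling with crypto/x509; it self-signs for brevity, whereas the real provisioner signs with the ca.pem/ca-key.pem pair shown in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-247751"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN list from the log line above:
    		DNSNames:    []string{"localhost", "minikube", "no-preload-247751"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.72.114"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }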
	I0229 02:30:30.921711  369591 provision.go:172] copyRemoteCerts
	I0229 02:30:30.921769  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:30.921793  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.924128  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924436  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.924474  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924628  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.924815  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.924975  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.925073  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.009229  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:31.035962  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:30:31.062947  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:31.089920  369591 provision.go:86] duration metric: configureAuth took 222.667724ms
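copyRemoteCerts pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest. Writing a root-owned file over a non-root SSH session is typically done by piping the bytes into sudo tee; a sketch of that pattern (a library-style fragment, with client being an *ssh.Client dialed as in the /etc/hosts sketch earlier; the function name is illustrative):

    package provisionsketch

    import (
    	"bytes"
    	"fmt"
    	"path/filepath"

    	"golang.org/x/crypto/ssh"
    )

    // writeRemoteFile streams data into a root-owned path via "sudo tee",
    // achieving the same effect as the scp lines above.
    func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	session.Stdin = bytes.NewReader(data)
    	return session.Run(fmt.Sprintf("sudo mkdir -p %s && sudo tee %s >/dev/null",
    		filepath.Dir(path), path))
    }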
	I0229 02:30:31.089947  369591 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:31.090145  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:30:31.090256  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.092831  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093148  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.093192  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093338  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.093511  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093699  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093864  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.094032  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.094196  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.094211  369591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:31.381995  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:31.382023  369591 machine.go:91] provisioned docker machine in 767.136363ms
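The stray %!s(MISSING) in the printf line above (and in the date and stat commands later in this log) is not part of the command that ran: the logger re-formats an already-rendered string, so a literal %s with no operand gets printed as Go's %!s(MISSING) error marker. Reconstructed with the artifact undone (an inference from the marker, not a verbatim capture), the command writes a sysconfig drop-in and restarts CRI-O:

    package main

    import "fmt"

    // provisionCRIO is the command behind the log lines above, with the
    // logger artifact undone (%!s(MISSING) -> %s). It would be executed
    // over the SSH session shown earlier.
    const provisionCRIO = `sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`

    func main() { fmt.Println(provisionCRIO) }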
	I0229 02:30:31.382036  369591 start.go:300] post-start starting for "no-preload-247751" (driver="kvm2")
	I0229 02:30:31.382049  369591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:31.382066  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.382560  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:31.382596  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.385219  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385574  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.385602  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385742  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.385955  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.386091  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.386254  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.469621  369591 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:31.474615  369591 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:31.474640  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:31.474702  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:31.474772  369591 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:31.474867  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:31.484964  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:31.512459  369591 start.go:303] post-start completed in 130.406384ms
	I0229 02:30:31.512519  369591 fix.go:56] fixHost completed within 19.27376704s
	I0229 02:30:31.512569  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.515169  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515568  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.515596  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515717  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.515944  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516108  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516260  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.516417  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.516592  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.516605  369591 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:30:31.627335  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173831.594794890
	
	I0229 02:30:31.627357  369591 fix.go:206] guest clock: 1709173831.594794890
	I0229 02:30:31.627366  369591 fix.go:219] Guest: 2024-02-29 02:30:31.59479489 +0000 UTC Remote: 2024-02-29 02:30:31.512545974 +0000 UTC m=+292.733991044 (delta=82.248916ms)
	I0229 02:30:31.627395  369591 fix.go:190] guest clock delta is within tolerance: 82.248916ms
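The clock check above runs `date +%s.%N` on the guest (again mangled to %!s(MISSING).%!N(MISSING) by the logger), parses the epoch-seconds.nanoseconds output, and accepts the skew when the guest/host delta is small. A sketch of the parse-and-compare step using the values from this run; the one-second tolerance is an assumption for illustration:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts "1709173831.594794890" (date +%s.%N output)
    // into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
    	sec, err := strconv.ParseInt(secStr, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if nsecStr != "" {
    		if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1709173831.594794890") // guest clock from the log
    	if err != nil {
    		panic(err)
    	}
    	remote := time.Date(2024, 2, 29, 2, 30, 31, 512545974, time.UTC) // host timestamp from the log
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed threshold, for illustration only
    	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
    }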
	I0229 02:30:31.627403  369591 start.go:83] releasing machines lock for "no-preload-247751", held for 19.38873796s
	I0229 02:30:31.627429  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.627713  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:31.630486  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.630930  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.630959  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.631131  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631640  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631830  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631920  369591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:31.631983  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.632122  369591 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:31.632160  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.634658  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.634874  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635050  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635079  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635348  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635354  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635379  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635478  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635566  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635633  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635758  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635768  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635934  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.635940  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.719735  369591 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:31.739831  369591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:31.891138  369591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:31.899497  369591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:31.899569  369591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:31.921755  369591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
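Before handing networking to its own CNI, minikube shelves any bridge/podman CNI definitions by renaming them with a .mk_disabled suffix, which is what the find/mv pipeline above does (its -printf format fell victim to the same logger artifact). A local equivalent, run as root on the guest (a sketch, not the shipped code):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	matches, err := filepath.Glob("/etc/cni/net.d/*")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	for _, p := range matches {
    		base := filepath.Base(p)
    		if strings.HasSuffix(base, ".mk_disabled") {
    			continue // already disabled on a previous start
    		}
    		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
    			if err := os.Rename(p, p+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, err)
    			} else {
    				fmt.Printf("disabled %s bridge cni config\n", p)
    			}
    		}
    	}
    }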
	I0229 02:30:31.921785  369591 start.go:475] detecting cgroup driver to use...
	I0229 02:30:31.921896  369591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:31.938157  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:31.952761  369591 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:31.952834  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:31.966785  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:31.980931  369591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:32.091879  369591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:32.261190  369591 docker.go:233] disabling docker service ...
	I0229 02:30:32.261272  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:32.278862  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:32.295382  369591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:32.433426  369591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:32.557975  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:30:32.573791  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:32.595797  369591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:32.595848  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.608978  369591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:32.609042  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.621681  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.634251  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.647107  369591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:30:32.660478  369591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:32.672596  369591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:32.672662  369591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:32.688480  369591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:30:32.700769  369591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:32.823703  369591 ssh_runner.go:195] Run: sudo systemctl restart crio
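The sed invocations above pin the pause image, force the cgroupfs cgroup manager, and re-insert conmon_cgroup = "pod" into /etc/crio/crio.conf.d/02-crio.conf before CRI-O restarts. The same rewrite as a small Go program (a sketch of the edit, not minikube's implementation):

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	// sed '/conmon_cgroup = .*/d'
    	data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAll(data, nil)
    	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	// followed by '/cgroup_manager = .*/a conmon_cgroup = "pod"'
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }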
	I0229 02:30:33.004444  369591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:33.004531  369591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:33.010801  369591 start.go:543] Will wait 60s for crictl version
	I0229 02:30:33.010862  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.015224  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:33.064627  369591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:33.064721  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.108265  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.142639  369591 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 02:30:33.144169  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:33.147250  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147609  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:33.147644  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147836  369591 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:33.153138  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:33.169427  369591 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:30:33.169481  369591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:33.214079  369591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 02:30:33.214113  369591 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:30:33.214193  369591 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.214216  369591 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.214252  369591 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.214276  369591 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.214335  369591 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.214323  369591 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.214354  369591 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 02:30:33.214241  369591 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.215880  369591 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215928  369591 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.215947  369591 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.216082  369591 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.216136  369591 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.216252  369591 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.348095  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 02:30:33.434211  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.496911  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.499249  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.503235  369591 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0229 02:30:33.503274  369591 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.503307  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.507506  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.548265  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.551287  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.589427  369591 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0229 02:30:33.589474  369591 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.589523  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.590660  369591 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0229 02:30:33.590688  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.590708  369591 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.590763  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.636886  369591 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0229 02:30:33.636934  369591 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.637001  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.664221  369591 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0229 02:30:33.664266  369591 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.664316  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.691890  369591 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0229 02:30:33.691945  369591 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.691978  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.691993  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.692003  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692096  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.692107  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.692104  369591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692165  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.793616  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793708  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.793723  369591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793772  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793839  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 02:30:33.793853  369591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793856  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 02:30:33.793884  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0229 02:30:33.793902  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.793910  369591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:33.793914  369591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:33.793936  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
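The interleaved stat -c "%s %y" calls (mangled to %!s(MISSING) %!y(MISSING) by the logger) fingerprint each cached image tarball by size and mtime; when the copy under /var/lib/minikube/images matches, the transfer is skipped ("copy: skipping ... (exists)") and podman load -i runs against the existing file. A sketch of that comparison, with the remote stat output passed in as a plain string (the GNU stat %y layout assumed here is "2006-01-02 15:04:05.000000000 +0000"):

    package main

    import (
    	"fmt"
    	"os"
    )

    // needsCopy reports whether the cached tarball must be re-copied to the
    // guest, using the same size+mtime fingerprint as the stat calls above.
    // remoteStat is the output of `stat -c "%s %y" <path>` on the guest.
    func needsCopy(localPath, remoteStat string) (bool, error) {
    	fi, err := os.Stat(localPath)
    	if err != nil {
    		return false, err
    	}
    	local := fmt.Sprintf("%d %s", fi.Size(),
    		fi.ModTime().UTC().Format("2006-01-02 15:04:05.000000000 -0700"))
    	return local != remoteStat, nil
    }

    func main() {
    	copyNeeded, err := needsCopy(
    		"/home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2",
    		"123456789 2024-02-29 02:30:33.000000000 +0000") // illustrative remote output
    	fmt.Println(copyNeeded, err)
    }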
	I0229 02:30:31.652037  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Start
	I0229 02:30:31.652202  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring networks are active...
	I0229 02:30:31.652984  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network default is active
	I0229 02:30:31.653457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network mk-default-k8s-diff-port-071485 is active
	I0229 02:30:31.653909  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Getting domain xml...
	I0229 02:30:31.654724  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Creating domain...
	I0229 02:30:32.911561  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting to get IP...
	I0229 02:30:32.912505  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.912932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.913032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:32.912928  370716 retry.go:31] will retry after 285.213813ms: waiting for machine to come up
	I0229 02:30:33.199327  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199733  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199764  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.199678  370716 retry.go:31] will retry after 334.890426ms: waiting for machine to come up
	I0229 02:30:33.536492  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.536976  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.537006  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.536924  370716 retry.go:31] will retry after 344.946846ms: waiting for machine to come up
	I0229 02:30:33.883432  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883911  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.883858  370716 retry.go:31] will retry after 516.135135ms: waiting for machine to come up
	I0229 02:30:34.401167  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401592  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401621  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.401543  370716 retry.go:31] will retry after 538.013174ms: waiting for machine to come up
	I0229 02:30:34.941529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942080  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942116  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.942039  370716 retry.go:31] will retry after 883.013858ms: waiting for machine to come up
	I0229 02:30:33.850786  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0229 02:30:33.850868  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 02:30:33.850977  369591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:34.154343  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.987957  369591 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (3.194013383s)
	I0229 02:30:36.987999  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0229 02:30:36.988100  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.194139784s)
	I0229 02:30:36.988127  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0229 02:30:36.988148  369591 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (3.194207246s)
	I0229 02:30:36.988178  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0229 02:30:36.988156  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988191  369591 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.194323563s)
	I0229 02:30:36.988206  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988236  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988269  369591 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.833890629s)
	I0229 02:30:36.988240  369591 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.13724749s)
	I0229 02:30:36.988310  369591 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 02:30:36.988331  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988343  369591 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.988375  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:36.993483  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:38.351556  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.363290185s)
	I0229 02:30:38.351599  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0229 02:30:38.351633  369591 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351632  369591 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.358113254s)
	I0229 02:30:38.351686  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 02:30:38.351705  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351782  369591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:35.827402  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827906  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:35.827872  370716 retry.go:31] will retry after 902.653821ms: waiting for machine to come up
	I0229 02:30:36.732470  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732925  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732957  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:36.732863  370716 retry.go:31] will retry after 1.322376383s: waiting for machine to come up
	I0229 02:30:38.057306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057874  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:38.057790  370716 retry.go:31] will retry after 1.16249498s: waiting for machine to come up
	I0229 02:30:39.221714  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222197  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:39.222156  370716 retry.go:31] will retry after 1.912383064s: waiting for machine to come up
	I0229 02:30:42.350149  369591 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.998331984s)
	I0229 02:30:42.350198  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0229 02:30:42.350214  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.99848453s)
	I0229 02:30:42.350266  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0229 02:30:42.350305  369591 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:42.350357  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:41.135736  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136113  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136144  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:41.136058  370716 retry.go:31] will retry after 2.823296742s: waiting for machine to come up
	I0229 02:30:43.960885  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961677  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961703  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:43.961582  370716 retry.go:31] will retry after 3.266272258s: waiting for machine to come up
	I0229 02:30:44.528869  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.178478896s)
	I0229 02:30:44.528915  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0229 02:30:44.528947  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:44.529014  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:46.991074  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462030604s)
	I0229 02:30:46.991103  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0229 02:30:46.991129  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:46.991195  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:47.229005  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229478  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229511  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:47.229417  370716 retry.go:31] will retry after 3.429712893s: waiting for machine to come up
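The retry.go:31 lines above show a jittered, growing backoff while polling libvirt for the machine's DHCP lease (285ms, 334ms, 344ms, 516ms, ... up to several seconds). A sketch of that retry shape; the lookup is stubbed out and the growth factor is an assumption matched loosely to the logged intervals:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // lookupIP stands in for the libvirt DHCP-lease query in the log;
    // it always fails here so the backoff loop is exercised.
    func lookupIP() (string, error) {
    	return "", errNoIP
    }

    func main() {
    	backoff := 250 * time.Millisecond
    	for attempt := 1; attempt <= 10; attempt++ {
    		ip, err := lookupIP()
    		if err == nil {
    			fmt.Println("Found IP for machine:", ip)
    			return
    		}
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // jitter, as in the varying delays above
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		backoff = backoff * 3 / 2 // grow roughly like the logged intervals
    	}
    }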
	I0229 02:30:51.887858  370051 start.go:369] acquired machines lock for "old-k8s-version-275488" in 4m15.644916266s
	I0229 02:30:51.887935  370051 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:51.887944  370051 fix.go:54] fixHost starting: 
	I0229 02:30:51.888374  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:51.888428  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:51.905851  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0229 02:30:51.906292  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:51.906778  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:30:51.906806  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:51.907250  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:51.907459  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:30:51.907631  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetState
	I0229 02:30:51.909061  370051 fix.go:102] recreateIfNeeded on old-k8s-version-275488: state=Stopped err=<nil>
	I0229 02:30:51.909093  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	W0229 02:30:51.909251  370051 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:51.911318  370051 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275488" ...
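The "Launching plugin server ... Plugin server listening at address 127.0.0.1:36903" lines reflect libmachine's driver-plugin model: each driver (here kvm2) runs as a separate binary serving RPC on an ephemeral localhost port, and calls such as GetState, DriverName and GetMachineName are proxied over that connection. A toy illustration of the pattern with net/rpc (the real wire protocol and method set differ):

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"net/rpc"
    )

    type Driver struct{}

    // GetState stands in for the driver calls seen in the log
    // (GetState, DriverName, GetMachineName, ...).
    func (d *Driver) GetState(name string, state *string) error {
    	*state = "Stopped"
    	return nil
    }

    func main() {
    	srv := rpc.NewServer()
    	if err := srv.Register(new(Driver)); err != nil {
    		log.Fatal(err)
    	}
    	ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral localhost port, as libmachine uses
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("Plugin server listening at address", ln.Addr())
    	go srv.Accept(ln)

    	client, err := rpc.Dial("tcp", ln.Addr().String())
    	if err != nil {
    		log.Fatal(err)
    	}
    	var state string
    	if err := client.Call("Driver.GetState", "old-k8s-version-275488", &state); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("machine state:", state)
    }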
	I0229 02:30:50.662939  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663341  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Found IP for machine: 192.168.61.233
	I0229 02:30:50.663366  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserving static IP address...
	I0229 02:30:50.663404  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has current primary IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663745  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.663781  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserved static IP address: 192.168.61.233
	I0229 02:30:50.663804  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | skip adding static IP to network mk-default-k8s-diff-port-071485 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"}
	I0229 02:30:50.663819  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for SSH to be available...
	I0229 02:30:50.663830  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Getting to WaitForSSH function...
	I0229 02:30:50.665924  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666270  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.666306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666411  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH client type: external
	I0229 02:30:50.666435  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa (-rw-------)
	I0229 02:30:50.666464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:50.666477  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | About to run SSH command:
	I0229 02:30:50.666489  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | exit 0
	I0229 02:30:50.794598  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | SSH cmd err, output: <nil>: 
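WaitForSSH above probes the guest by running `exit 0` through an external ssh client with host-key checking disabled, retrying until the command succeeds. A minimal Go sketch of that probe, with the flags assumed from the log line, might be:

// Hedged sketch: probe SSH readiness by running `exit 0` through the
// system ssh binary, using a subset of the options shown in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH answered and the command exited 0
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready within %v", addr, timeout)
}

func main() {
	// "/path/to/id_rsa" is a placeholder, not a path from the log.
	fmt.Println(waitForSSH("192.168.61.233", "/path/to/id_rsa", time.Minute))
}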
	I0229 02:30:50.795011  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetConfigRaw
	I0229 02:30:50.795753  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:50.798443  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.798796  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.798822  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.799151  369869 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/config.json ...
	I0229 02:30:50.799410  369869 machine.go:88] provisioning docker machine ...
	I0229 02:30:50.799440  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:50.799684  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.799937  369869 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071485"
	I0229 02:30:50.799963  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.800129  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.802457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802786  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.802813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802923  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.803087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803281  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803393  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.803527  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.803744  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.803757  369869 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071485 && echo "default-k8s-diff-port-071485" | sudo tee /etc/hostname
	I0229 02:30:50.930812  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071485
	
	I0229 02:30:50.930849  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.933650  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934017  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.934057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934217  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.934458  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934651  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.934964  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.935141  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.935159  369869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071485/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:51.057233  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:51.057266  369869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:51.057307  369869 buildroot.go:174] setting up certificates
	I0229 02:30:51.057321  369869 provision.go:83] configureAuth start
	I0229 02:30:51.057335  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:51.057615  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.060233  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060563  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.060595  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060707  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.062583  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.062889  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.062938  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.063065  369869 provision.go:138] copyHostCerts
	I0229 02:30:51.063121  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:51.063140  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:51.063193  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:51.063290  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:51.063304  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:51.063332  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:51.063396  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:51.063403  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:51.063420  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:51.063482  369869 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071485 san=[192.168.61.233 192.168.61.233 localhost 127.0.0.1 minikube default-k8s-diff-port-071485]
	I0229 02:30:51.180356  369869 provision.go:172] copyRemoteCerts
	I0229 02:30:51.180417  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:51.180446  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.182981  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183262  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.183295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183465  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.183656  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.183814  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.183958  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.270548  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:51.297136  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 02:30:51.323133  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:51.349241  369869 provision.go:86] duration metric: configureAuth took 291.905825ms
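configureAuth regenerates a server certificate whose SANs cover the VM's IP, localhost, and the machine names, then copies the CA and server pair to /etc/docker. A self-contained Go sketch of issuing such a SAN-bearing certificate (self-signed here so the sketch stays runnable; minikube signs with its own CA) could look like:

// Minimal sketch, not minikube's provision code: create a server cert
// carrying the SAN list seen in the log, self-signed for simplicity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-071485"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-071485"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.233"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed: template doubles as parent. minikube would pass its CA here.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}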
	I0229 02:30:51.349269  369869 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:51.349453  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:30:51.349529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.352119  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352473  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.352503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352658  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.352839  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353009  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353122  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.353304  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.353480  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.353495  369869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:51.639987  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:51.640022  369869 machine.go:91] provisioned docker machine in 840.591751ms
	I0229 02:30:51.640041  369869 start.go:300] post-start starting for "default-k8s-diff-port-071485" (driver="kvm2")
	I0229 02:30:51.640057  369869 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:51.640087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.640450  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:51.640486  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.643118  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643427  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.643464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643661  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.643871  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.644025  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.644164  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.730150  369869 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:51.735109  369869 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:51.735135  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:51.735207  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:51.735298  369869 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:51.735416  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:51.745416  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:51.771727  369869 start.go:303] post-start completed in 131.66845ms
	I0229 02:30:51.771756  369869 fix.go:56] fixHost completed within 20.144195498s
	I0229 02:30:51.771782  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.774300  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774582  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.774610  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774744  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.774972  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.775481  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.775648  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.775659  369869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:30:51.887656  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173851.865903243
	
	I0229 02:30:51.887683  369869 fix.go:206] guest clock: 1709173851.865903243
	I0229 02:30:51.887691  369869 fix.go:219] Guest: 2024-02-29 02:30:51.865903243 +0000 UTC Remote: 2024-02-29 02:30:51.771760886 +0000 UTC m=+266.432013426 (delta=94.142357ms)
	I0229 02:30:51.887738  369869 fix.go:190] guest clock delta is within tolerance: 94.142357ms
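The guest-clock check parses the VM's `date +%s.%N` output and compares it against the host clock; the restart proceeds because the 94ms delta is inside tolerance. A rough Go sketch of that comparison (the 2s threshold below is an assumption for illustration, not a value from the log):

// Sketch of the clock-delta check implied by fix.go. Float parsing is
// approximate at nanosecond precision, which is fine for a tolerance test.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(int64(secs), int64((secs-math.Floor(secs))*1e9))
	return host.Sub(guest), nil
}

func main() {
	d, _ := clockDelta("1709173851.865903243", time.Now())
	const tolerance = 2 * time.Second // assumed threshold for illustration
	if d < 0 {
		d = -d
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, d <= tolerance)
}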
	I0229 02:30:51.887744  369869 start.go:83] releasing machines lock for "default-k8s-diff-port-071485", held for 20.260217484s
	I0229 02:30:51.887771  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.888047  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.890930  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891264  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.891294  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891491  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892002  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892209  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892299  369869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:51.892370  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.892472  369869 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:51.892503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.895178  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895415  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895591  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895626  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895769  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895800  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895820  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.895966  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.896055  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896141  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896212  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896367  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.896447  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.976085  369869 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:52.001946  369869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:52.156753  369869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:52.164196  369869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:52.164302  369869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:52.189176  369869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:30:52.189201  369869 start.go:475] detecting cgroup driver to use...
	I0229 02:30:52.189281  369869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:52.207647  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:52.223752  369869 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:52.223842  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:52.246026  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:52.262180  369869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:52.409077  369869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:52.583777  369869 docker.go:233] disabling docker service ...
	I0229 02:30:52.583850  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:52.601434  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:52.617382  369869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:52.757258  369869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:52.898036  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:30:52.915787  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:52.939344  369869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:52.939417  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.951659  369869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:52.951722  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.963072  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.974800  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.986490  369869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:30:52.998630  369869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:53.009783  369869 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:53.009862  369869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:53.026356  369869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
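The failed sysctl above is expected on a fresh boot: /proc/sys/net/bridge only exists once br_netfilter is loaded, so minikube falls back to modprobe and then enables IP forwarding. Sketched in Go (illustrative, not minikube's code):

// Hedged sketch of the netfilter fallback seen above: if the sysctl key
// is missing, load br_netfilter and retry the sysctl.
package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil
	}
	// The key only appears once the module is loaded.
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
}

func main() { fmt.Println(ensureBridgeNetfilter()) }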
	I0229 02:30:53.038720  369869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:53.171220  369869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:30:53.326032  369869 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:53.326102  369869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:53.332369  369869 start.go:543] Will wait 60s for crictl version
	I0229 02:30:53.332431  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:30:53.336784  369869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:53.378780  369869 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:53.378902  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.411158  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.447038  369869 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:30:49.053324  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.062103665s)
	I0229 02:30:49.053353  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0229 02:30:49.053378  369591 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.053426  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.910791  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0229 02:30:49.910854  369591 cache_images.go:123] Successfully loaded all cached images
	I0229 02:30:49.910862  369591 cache_images.go:92] LoadImages completed in 16.696734078s
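Each cached image tarball is copied into the guest and then loaded into the CRI-O image store with `podman load -i`, as the ssh_runner lines show. A trivial Go wrapper around that step (run inside the guest) might read:

// Sketch of the image-load step: run `podman load` against a tarball
// already transferred into the guest. Path taken from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func loadImage(tar string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tar, err, out)
	}
	return nil
}

func main() {
	fmt.Println(loadImage("/var/lib/minikube/images/storage-provisioner_v5"))
}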
	I0229 02:30:49.910994  369591 ssh_runner.go:195] Run: crio config
	I0229 02:30:49.961413  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:30:49.961435  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:49.961456  369591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:49.961509  369591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-247751 NodeName:no-preload-247751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:49.961701  369591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-247751"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:30:49.961801  369591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-247751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:30:49.961866  369591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 02:30:49.973105  369591 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:49.973170  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:49.983178  369591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0229 02:30:50.001511  369591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 02:30:50.019574  369591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0229 02:30:50.037993  369591 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:50.042075  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
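The one-liner above makes the /etc/hosts update idempotent: it filters out any existing control-plane.minikube.internal line before appending the fresh mapping. The same upsert expressed as a small Go sketch (writing to /tmp/hosts so it is safe to run):

// Sketch of the idempotent hosts-file rewrite used above: drop any
// existing line for the name, then append the fresh mapping.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Seed a scratch file instead of touching the real /etc/hosts.
	_ = os.WriteFile("/tmp/hosts", []byte("127.0.0.1\tlocalhost\n"), 0644)
	fmt.Println(upsertHost("/tmp/hosts", "192.168.72.114", "control-plane.minikube.internal"))
}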
	I0229 02:30:50.054761  369591 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751 for IP: 192.168.72.114
	I0229 02:30:50.054796  369591 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:50.054976  369591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:50.055031  369591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:50.055146  369591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/client.key
	I0229 02:30:50.055243  369591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key.9adeb8c5
	I0229 02:30:50.055310  369591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key
	I0229 02:30:50.055440  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:50.055481  369591 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:50.055502  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:50.055542  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:50.055577  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:50.055658  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:50.055728  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:50.056454  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:50.083764  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:50.110733  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:50.139180  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:50.167000  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:50.194044  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:50.220671  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:50.247561  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:50.274577  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:50.300997  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:50.327718  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:50.355463  369591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:50.374921  369591 ssh_runner.go:195] Run: openssl version
	I0229 02:30:50.381614  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:50.393546  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398532  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398594  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.404719  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:30:50.416507  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:50.428072  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433031  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433106  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.439174  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:50.450778  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:50.462238  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467219  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467269  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.473395  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:50.484817  369591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:50.489643  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:50.496274  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:50.502579  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:50.508665  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:50.514827  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:50.520958  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
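The openssl `-checkend 86400` runs above verify that none of the cluster certificates expire within the next 24 hours; a non-zero exit would trigger regeneration. An equivalent check in Go, as a sketch:

// Sketch equivalent of `openssl x509 -checkend 86400`: parse the PEM
// certificate and report whether it expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true means the cert's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err)
}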
	I0229 02:30:50.527032  369591 kubeadm.go:404] StartCluster: {Name:no-preload-247751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:50.527147  369591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:50.527194  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:50.565834  369591 cri.go:89] found id: ""
	I0229 02:30:50.565931  369591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:50.577305  369591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:50.577354  369591 kubeadm.go:636] restartCluster start
	I0229 02:30:50.577408  369591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:50.587881  369591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:50.588896  369591 kubeconfig.go:92] found "no-preload-247751" server: "https://192.168.72.114:8443"
	I0229 02:30:50.591223  369591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:50.601374  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:50.601434  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:50.613730  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.102422  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.102539  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.116483  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.601564  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.601657  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.615481  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.102039  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.102123  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.121300  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.601999  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.602093  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.618701  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.102291  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.102403  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.117898  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.602410  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.602496  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.618760  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
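restartCluster keeps polling for a kube-apiserver process about twice a second because CRI-O was just restarted and no kube-system containers exist yet (hence the empty `crictl ps` output earlier). The shape of that poll in Go, sketched:

// Sketch of the apiserver liveness poll above: look for a kube-apiserver
// process roughly twice a second until a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return string(out), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() { fmt.Println(waitForAPIServerPID(time.Minute)) }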
	I0229 02:30:53.448437  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:53.451649  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.451998  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:53.452052  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.452302  369869 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:53.458709  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:53.477744  369869 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:30:53.477831  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:53.527511  369869 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:30:53.527593  369869 ssh_runner.go:195] Run: which lz4
	I0229 02:30:53.532370  369869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:30:53.537149  369869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:30:53.537179  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:30:51.912520  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .Start
	I0229 02:30:51.912688  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring networks are active...
	I0229 02:30:51.913511  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network default is active
	I0229 02:30:51.913929  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network mk-old-k8s-version-275488 is active
	I0229 02:30:51.914378  370051 main.go:141] libmachine: (old-k8s-version-275488) Getting domain xml...
	I0229 02:30:51.915191  370051 main.go:141] libmachine: (old-k8s-version-275488) Creating domain...
	I0229 02:30:53.179261  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting to get IP...
	I0229 02:30:53.180359  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.180800  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.180922  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.180789  370858 retry.go:31] will retry after 282.360524ms: waiting for machine to come up
	I0229 02:30:53.465135  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.465708  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.465742  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.465651  370858 retry.go:31] will retry after 341.876004ms: waiting for machine to come up
	I0229 02:30:53.809322  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.809734  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.809876  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.809797  370858 retry.go:31] will retry after 356.208548ms: waiting for machine to come up
	I0229 02:30:54.167329  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.167824  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.167852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.167760  370858 retry.go:31] will retry after 395.76503ms: waiting for machine to come up
	I0229 02:30:54.565496  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.565976  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.566004  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.565933  370858 retry.go:31] will retry after 617.898012ms: waiting for machine to come up
	I0229 02:30:55.185679  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:55.186193  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:55.186237  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:55.186144  370858 retry.go:31] will retry after 911.947678ms: waiting for machine to come up
	I0229 02:30:56.099334  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:56.099788  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:56.099815  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:56.099726  370858 retry.go:31] will retry after 1.132066509s: waiting for machine to come up
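Meanwhile 370051 is waiting for the restarted old-k8s-version domain to pick up a DHCP lease; each failed lookup schedules another attempt with a growing, jittered delay (282ms, 341ms, 356ms, 395ms, 617ms, 911ms, 1.13s). A sketch of that backoff shape, with lookup as a hypothetical stand-in for the libvirt lease query:

    // Sketch: retry an IP lookup with growing, jittered delays until a deadline.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        base := 250 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            // grow the base and add jitter, mirroring the 282ms -> 1.13s series above
            time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
            base = base * 3 / 2
        }
        return "", errNoLease
    }

    func main() {
        _, err := waitForIP(func() (string, error) { return "", errNoLease }, 2*time.Second)
        fmt.Println(err)
    }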
	I0229 02:30:54.102304  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.102485  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.123193  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:54.601763  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.601890  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.621846  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.102417  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.102503  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.129010  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.601478  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.601532  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.620169  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.101701  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.101776  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.121369  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.601447  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.601550  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.617079  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.101509  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.101648  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.121691  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.601658  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.601754  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.620357  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.101829  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.101921  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.115818  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.602403  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.602509  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.621857  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.599398  369869 crio.go:444] Took 2.067052 seconds to copy over tarball
	I0229 02:30:55.599501  369869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:30:58.543850  369869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944309258s)
	I0229 02:30:58.543884  369869 crio.go:451] Took 2.944447 seconds to extract the tarball
	I0229 02:30:58.543896  369869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:30:58.592492  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:58.751479  369869 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:30:58.751509  369869 cache_images.go:84] Images are preloaded, skipping loading
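After extraction, crictl images --output json is re-run and its repo tags are compared against the expected image set; only a full match lets minikube skip image loading. A sketch of that comparison; the images/repoTags field names follow crictl's JSON output as I understand it, so treat them as an assumption for your crictl version:

    // Sketch: check whether every expected image shows up in crictl's JSON output.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func allPreloaded(crictlJSON []byte, want []string) (bool, error) {
        var out crictlImages
        if err := json.Unmarshal(crictlJSON, &out); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range out.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, w := range want {
            if !have[w] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := allPreloaded(
            []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"]}]}`),
            []string{"registry.k8s.io/kube-apiserver:v1.28.4"})
        fmt.Println(ok, err)
    }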
	I0229 02:30:58.751576  369869 ssh_runner.go:195] Run: crio config
	I0229 02:30:58.813487  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:30:58.813515  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:58.813540  369869 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:58.813566  369869 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.233 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071485 NodeName:default-k8s-diff-port-071485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:58.813785  369869 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071485"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
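The config above is a single file carrying four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---; kubeadm and the kubelet each consume only the kinds they understand. A quick sketch of splitting such a stream into its documents (naive string split; a production parser would use a streaming YAML decoder):

    // Sketch: split a multi-document kubeadm YAML stream on "---" separators.
    package main

    import (
        "fmt"
        "strings"
    )

    func splitDocs(stream string) []string {
        var docs []string
        for _, d := range strings.Split(stream, "\n---\n") {
            if s := strings.TrimSpace(d); s != "" {
                docs = append(docs, s)
            }
        }
        return docs
    }

    func main() {
        stream := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
        for i, d := range splitDocs(stream) {
            fmt.Printf("doc %d starts with: %s\n", i, strings.SplitN(d, "\n", 2)[0])
        }
    }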
	
	I0229 02:30:58.813898  369869 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-071485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0229 02:30:58.813971  369869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:30:58.826199  369869 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:58.826324  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:58.837384  369869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 02:30:58.856023  369869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:30:58.876432  369869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0229 02:30:58.900684  369869 ssh_runner.go:195] Run: grep 192.168.61.233	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:58.905249  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:58.920007  369869 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485 for IP: 192.168.61.233
	I0229 02:30:58.920046  369869 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:58.920249  369869 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:58.920319  369869 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:58.920432  369869 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/client.key
	I0229 02:30:58.995037  369869 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key.b3fc8ab0
	I0229 02:30:58.995173  369869 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key
	I0229 02:30:58.995377  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:58.995430  369869 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:58.995451  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:58.995503  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:58.995543  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:58.995590  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:58.995653  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:58.996607  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:59.026487  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:59.054725  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:59.082553  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:59.110374  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:59.141972  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:59.170097  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:59.201206  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:59.232790  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:59.263940  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:59.292401  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:59.321920  369869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:59.343921  369869 ssh_runner.go:195] Run: openssl version
	I0229 02:30:59.351308  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:59.364059  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369212  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369302  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.375683  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:59.389046  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:59.404101  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409433  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409491  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.416126  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:59.429674  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:59.443405  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448931  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448991  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.455800  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
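Each ln -fs above installs a CA under /etc/ssl/certs/<hash>.0, where the hash (b5213941, 51391683, 3ec20f2e) is the subject-name hash that openssl x509 -hash -noout prints; OpenSSL-linked clients locate trusted CAs by exactly that filename. A sketch of the same step, shelling out to openssl for the hash (assumes the openssl binary on PATH and write access to /etc/ssl/certs):

    // Sketch: link a CA cert to /etc/ssl/certs/<subject-hash>.0 for OpenSSL lookup.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // mirror ln -fs: replace any stale link first
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }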
	I0229 02:30:59.469013  369869 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:59.474745  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:59.481689  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:59.488868  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:59.496380  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:59.503593  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:59.510485  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
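The -checkend 86400 runs above make openssl exit non-zero when a certificate expires within the next 24 hours; a failure here would force minikube to regenerate the control-plane certs instead of reusing them. The same check in pure Go via crypto/x509, no openssl required:

    // Sketch: report whether a PEM certificate is still valid 24h from now.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    func validFor(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(validFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour))
    }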
	I0229 02:30:59.517770  369869 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-071485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:59.517894  369869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:59.517941  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:59.564631  369869 cri.go:89] found id: ""
	I0229 02:30:59.564718  369869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:59.578812  369869 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:59.578881  369869 kubeadm.go:636] restartCluster start
	I0229 02:30:59.578954  369869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:59.592900  369869 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.593909  369869 kubeconfig.go:92] found "default-k8s-diff-port-071485" server: "https://192.168.61.233:8444"
	I0229 02:30:59.596083  369869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:59.609384  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.609466  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.625617  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.110139  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.110282  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.127301  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.233610  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:57.234113  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:57.234145  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:57.234063  370858 retry.go:31] will retry after 1.238348525s: waiting for machine to come up
	I0229 02:30:58.474146  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:58.474696  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:58.474733  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:58.474642  370858 retry.go:31] will retry after 1.373712981s: waiting for machine to come up
	I0229 02:30:59.850075  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:59.850504  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:59.850526  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:59.850460  370858 retry.go:31] will retry after 2.156069813s: waiting for machine to come up
	I0229 02:30:59.101727  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.101812  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.120465  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.602060  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.602155  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.620588  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.102108  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.102203  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.120822  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.602443  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.602545  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.616796  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.616835  369591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:00.616858  369591 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:00.616873  369591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:00.616940  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:00.661747  369591 cri.go:89] found id: ""
	I0229 02:31:00.661869  369591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:00.684098  369591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:00.696989  369591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:00.697059  369591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708553  369591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708583  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:00.827929  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.578572  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.818119  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.892891  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
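With all four kubeconfig files missing, 369591 rebuilds the control plane through individual kubeadm init phase subcommands rather than a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, then etcd, all against the same kubeadm.yaml. A sketch of driving those phases in order (paths taken from the log; error handling simplified):

    // Sketch: run the kubeadm init phases minikube uses for a cluster restart.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func restartPhases(kubeadm, cfg string) error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", cfg)
            if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v failed: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(restartPhases(
            "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm",
            "/var/tmp/minikube/kubeadm.yaml"))
    }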
	I0229 02:31:01.964926  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:01.965037  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.466098  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.965290  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.465897  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.483060  369591 api_server.go:72] duration metric: took 1.518135432s to wait for apiserver process to appear ...
	I0229 02:31:03.483103  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:03.483127  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:00.610179  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.610299  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.630460  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.109543  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.109680  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.129578  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.610203  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.610301  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.630078  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.109835  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.109945  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.127400  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.610160  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.610269  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.630581  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.109702  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.109836  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.129754  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.610303  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.610389  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.629702  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.110325  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.110459  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.128740  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.610305  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.610403  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.624716  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:05.110349  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.110457  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.130070  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.007911  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:02.008381  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:02.008409  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:02.008330  370858 retry.go:31] will retry after 1.864134048s: waiting for machine to come up
	I0229 02:31:03.873997  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:03.874606  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:03.874653  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:03.874547  370858 retry.go:31] will retry after 2.45659808s: waiting for machine to come up
	I0229 02:31:06.111554  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.111581  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.111596  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.191055  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.191090  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.483401  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.489220  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.489254  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:06.983921  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.988354  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.988430  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:07.483305  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:07.489830  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:31:07.497146  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:31:07.497187  369591 api_server.go:131] duration metric: took 4.014075718s to wait for apiserver health ...
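The healthz wait above follows the usual restart progression: 403 while anonymous access is still denied, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are unfinished, then 200 with the literal body "ok". A sketch of that probe; TLS verification is skipped here only because the probe can run before the cluster CA is trusted locally, and a real client should pin the CA instead:

    // Sketch: poll an apiserver /healthz endpoint until it returns 200 "ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Self-signed apiserver cert; pin the cluster CA in real code.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                // 403: anonymous not yet allowed; 500: post-start hooks still running.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s never became healthy", url)
    }

    func main() {
        fmt.Println(waitHealthy("https://192.168.72.114:8443/healthz", 4*time.Minute))
    }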
	I0229 02:31:07.497201  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:31:07.497210  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:07.498785  369591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:07.500032  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:07.530625  369591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:31:07.594249  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:07.604940  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:07.604973  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:07.604980  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:07.604989  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:07.604995  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:07.605003  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:07.605015  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:07.605022  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:07.605032  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:07.605052  369591 system_pods.go:74] duration metric: took 10.776743ms to wait for pod list to return data ...
	I0229 02:31:07.605061  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:07.608034  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:07.608059  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:07.608073  369591 node_conditions.go:105] duration metric: took 3.004467ms to run NodePressure ...
	I0229 02:31:07.608096  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:07.975871  369591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980949  369591 kubeadm.go:787] kubelet initialised
	I0229 02:31:07.980970  369591 kubeadm.go:788] duration metric: took 5.071971ms waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980979  369591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:07.986764  369591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.992673  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992698  369591 pod_ready.go:81] duration metric: took 5.911106ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.992707  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992717  369591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.997300  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997322  369591 pod_ready.go:81] duration metric: took 4.594827ms waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.997330  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997335  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.004032  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004052  369591 pod_ready.go:81] duration metric: took 6.71117ms waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.004060  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004066  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.009947  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.009985  369591 pod_ready.go:81] duration metric: took 5.909502ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.010001  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.010009  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.398938  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398965  369591 pod_ready.go:81] duration metric: took 388.944943ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.398975  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398982  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.797706  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797733  369591 pod_ready.go:81] duration metric: took 398.745142ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.797744  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797751  369591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:09.198467  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198496  369591 pod_ready.go:81] duration metric: took 400.737315ms waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:09.198506  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198511  369591 pod_ready.go:38] duration metric: took 1.217523271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
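A note on the loop above: pod_ready gives each control-plane pod up to 4m0s to report Ready, but short-circuits (the paired pod_ready.go:97/:66 lines) as soon as the hosting node itself reports Ready:"False", since no pod can become Ready on a NotReady node. A minimal client-go sketch of the underlying condition check, assuming the kubeconfig path shown in this log (this is not minikube's actual implementation):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig minikube wrote for this run (path from the log).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18063-316644/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "kube-apiserver-no-preload-247751", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// A pod is "Ready" when its PodReady condition is ConditionTrue.
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("Ready=%s Reason=%s\n", c.Status, c.Reason)
			}
		}
	}

Without the node-status check, each per-pod wait above would burn its full 4m0s timeout on a node that cannot host Ready pods; skipping early is what keeps the whole extra wait at ~1.2s here.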
	I0229 02:31:09.198530  369591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:31:09.211194  369591 ops.go:34] apiserver oom_adj: -16
	I0229 02:31:09.211222  369591 kubeadm.go:640] restartCluster took 18.633858862s
	I0229 02:31:09.211232  369591 kubeadm.go:406] StartCluster complete in 18.684207766s
	I0229 02:31:09.211263  369591 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.211346  369591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:09.212899  369591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.213213  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:31:09.213318  369591 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:31:09.213406  369591 addons.go:69] Setting storage-provisioner=true in profile "no-preload-247751"
	I0229 02:31:09.213426  369591 addons.go:69] Setting default-storageclass=true in profile "no-preload-247751"
	I0229 02:31:09.213446  369591 addons.go:69] Setting metrics-server=true in profile "no-preload-247751"
	I0229 02:31:09.213463  369591 addons.go:234] Setting addon metrics-server=true in "no-preload-247751"
	I0229 02:31:09.213465  369591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-247751"
	I0229 02:31:09.213463  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0229 02:31:09.213472  369591 addons.go:243] addon metrics-server should already be in state true
	I0229 02:31:09.213435  369591 addons.go:234] Setting addon storage-provisioner=true in "no-preload-247751"
	W0229 02:31:09.213515  369591 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:31:09.213519  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213541  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213915  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213924  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213943  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213978  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.218976  369591 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-247751" context rescaled to 1 replicas
	I0229 02:31:09.219015  369591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:31:09.220657  369591 out.go:177] * Verifying Kubernetes components...
	I0229 02:31:09.221954  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
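The kapi.go line above trims the coredns deployment to one replica, which is all a single-node cluster needs; the systemctl check that follows confirms kubelet is active before components are verified. Rescaling via the scale subresource looks roughly like this with client-go (a sketch assuming a reachable kubeconfig at the default path, not minikube's exact code path):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		// Read the current scale of the coredns deployment, then write it back at 1.
		scale, err := client.AppsV1().Deployments("kube-system").GetScale(context.Background(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(context.Background(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("coredns rescaled to 1 replica")
	}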
	I0229 02:31:09.230064  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0229 02:31:09.230528  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.231030  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.231053  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.231526  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.231762  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.233032  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0229 02:31:09.233487  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.233929  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0229 02:31:09.234003  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234028  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.234293  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.234406  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.234784  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234811  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.235009  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235068  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.235163  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.235631  369591 addons.go:234] Setting addon default-storageclass=true in "no-preload-247751"
	W0229 02:31:09.235651  369591 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:31:09.235679  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.235738  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235772  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.236123  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.236157  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.250756  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0229 02:31:09.251190  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.251830  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.251855  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.252228  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.252403  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.254210  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.256240  369591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:09.257522  369591 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.257537  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:31:09.257552  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.255418  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0229 02:31:09.255485  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0229 02:31:09.258003  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258129  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258432  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258457  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258664  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258676  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258822  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.258983  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.259278  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.259313  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.259533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.261295  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.261320  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.262706  369591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:31:05.610163  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.610319  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.627782  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.110424  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.110521  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.129628  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.610193  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.610330  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.624176  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.110249  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.110354  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.129955  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.609462  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.609536  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.623687  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.110263  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.110407  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.126900  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.610447  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.625182  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.109675  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.109759  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.124637  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.610399  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.630681  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.630715  369869 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
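The block above is one probe roughly every 500ms: pgrep -xnf matches against the full kube-apiserver command line, exits with status 1 (and empty stdout/stderr) whenever no process matches, and once the overall deadline lapses the run is marked "needs reconfigure". A standalone sketch of that probe loop (timings are illustrative and the SSH hop is elided; this would run on the guest itself):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(30 * time.Second) // illustrative; the log waits longer
		for time.Now().Before(deadline) {
			// pgrep -xnf matches the full command line; exit status 1 means "no match".
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver process never appeared: needs reconfigure")
	}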
	I0229 02:31:09.630757  369869 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:09.630777  369869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:09.630844  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:09.683876  369869 cri.go:89] found id: ""
	I0229 02:31:09.683963  369869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:09.706059  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:09.719868  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:09.719939  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734591  369869 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734622  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:09.862689  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
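Because the stale-config check found none of admin.conf/kubelet.conf/controller-manager.conf/scheduler.conf, the control plane is rebuilt by replaying individual kubeadm init phases against /var/tmp/minikube/kubeadm.yaml instead of a full kubeadm init; certs and kubeconfig run here, and kubelet-start, control-plane, and etcd follow further down. The sequence wrapped in a minimal driver loop (a sketch of the logged commands, not minikube's ssh_runner, which executes them on the guest):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Phases in the order they appear in this log.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				log.Fatalf("phase %q failed: %v\n%s", p, err, out)
			}
		}
	}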
	I0229 02:31:09.263808  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:31:09.263830  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:31:09.263849  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.261760  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.261947  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.263890  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.264339  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.264522  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.264704  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.266885  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267339  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.267358  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.267649  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.267782  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.267862  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.302813  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0229 02:31:09.303329  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.303878  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.303909  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.304305  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.304509  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.306147  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.306434  369591 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.306454  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:31:09.306472  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.309029  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309345  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.309382  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309670  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.309872  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.310048  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.310193  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.402579  369591 node_ready.go:35] waiting up to 6m0s for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:09.402756  369591 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:31:09.420259  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.426629  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:31:09.426655  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:31:09.446028  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.457219  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:31:09.457244  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:31:09.504028  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:09.504054  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
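Each "scp memory --> ..." line streams a manifest embedded in the test binary straight onto the guest over SSH; only afterwards does a single kubectl apply (next line) consume them. A rough equivalent with golang.org/x/crypto/ssh, reusing the user, IP, and key path from the sshutil.go lines above (a sketch, not minikube's ssh_runner; the manifest bytes are a placeholder):

	package main

	import (
		"bytes"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		manifest := []byte("apiVersion: v1\nkind: Service\n") // placeholder for the embedded addon YAML
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
		}
		client, err := ssh.Dial("tcp", "192.168.72.114:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		// Pipe the in-memory bytes into sudo tee on the guest.
		sess.Stdin = bytes.NewReader(manifest)
		if err := sess.Run("sudo tee /etc/kubernetes/addons/metrics-server-service.yaml >/dev/null"); err != nil {
			log.Fatal(err)
		}
	}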
	I0229 02:31:09.554137  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:10.485560  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.039492326s)
	I0229 02:31:10.485633  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485646  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.485928  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065634917s)
	I0229 02:31:10.485970  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485986  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486053  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.486072  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486092  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486104  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486112  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486254  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486287  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486304  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486320  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.487538  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487556  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487566  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.487543  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487582  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487579  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.494355  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.494374  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.494614  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.494635  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.494633  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.559201  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.005004802s)
	I0229 02:31:10.559258  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559276  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559592  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559614  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559625  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559633  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559899  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559915  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559926  369591 addons.go:470] Verifying addon metrics-server=true in "no-preload-247751"
	I0229 02:31:10.561833  369591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:31:06.333259  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:06.333776  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:06.333811  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:06.333733  370858 retry.go:31] will retry after 3.223893936s: waiting for machine to come up
	I0229 02:31:09.559349  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:09.559937  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:09.559968  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:09.559891  370858 retry.go:31] will retry after 5.278822831s: waiting for machine to come up
	I0229 02:31:10.560171  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.563240  369591 addons.go:505] enable addons completed in 1.349905679s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 02:31:11.408006  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:10.805438  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.016546  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.132323  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.212201  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:11.212309  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:11.713366  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.212866  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.713327  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.732027  369869 api_server.go:72] duration metric: took 1.519826457s to wait for apiserver process to appear ...
	I0229 02:31:12.732056  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:12.732078  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.109299  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.109349  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.109368  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.166169  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.166209  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.232359  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.267052  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.267099  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
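These healthz probes trace apiserver startup in order: a 403 means TLS is already answering but anonymous access to /healthz is still forbidden (the rbac/bootstrap-roles post-start hook has not run yet), and the 500 enumerates exactly which post-start hooks remain pending (the [-] entries). The same unauthenticated probe as a sketch, with certificate verification skipped because the probe carries no CA bundle (endpoint taken from the log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		resp, err := client.Get("https://192.168.61.233:8444/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// 403 => up, anonymous access forbidden; 500 => post-start hooks pending; 200 "ok" => healthy.
		fmt.Printf("%d:\n%s\n", resp.StatusCode, body)
	}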
	I0229 02:31:16.096073  369508 start.go:369] acquired machines lock for "embed-certs-915633" in 58.856797615s
	I0229 02:31:16.096132  369508 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:31:16.096144  369508 fix.go:54] fixHost starting: 
	I0229 02:31:16.096651  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:16.096692  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:16.115912  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0229 02:31:16.116419  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:16.116967  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:31:16.116999  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:16.117362  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:16.117562  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:16.117742  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:31:16.119589  369508 fix.go:102] recreateIfNeeded on embed-certs-915633: state=Stopped err=<nil>
	I0229 02:31:16.119614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	W0229 02:31:16.119809  369508 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:31:16.121566  369508 out.go:177] * Restarting existing kvm2 VM for "embed-certs-915633" ...
	I0229 02:31:14.842498  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843049  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has current primary IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843083  370051 main.go:141] libmachine: (old-k8s-version-275488) Found IP for machine: 192.168.39.160
	I0229 02:31:14.843112  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserving static IP address...
	I0229 02:31:14.843485  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.843510  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | skip adding static IP to network mk-old-k8s-version-275488 - found existing host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"}
	I0229 02:31:14.843525  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserved static IP address: 192.168.39.160
	I0229 02:31:14.843535  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Getting to WaitForSSH function...
	I0229 02:31:14.843553  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting for SSH to be available...
	I0229 02:31:14.845739  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846087  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.846120  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846289  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH client type: external
	I0229 02:31:14.846319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa (-rw-------)
	I0229 02:31:14.846355  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:14.846372  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | About to run SSH command:
	I0229 02:31:14.846390  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | exit 0
	I0229 02:31:14.979384  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | SSH cmd err, output: <nil>: 
	I0229 02:31:14.979896  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetConfigRaw
	I0229 02:31:14.980716  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:14.983852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984278  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.984319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984639  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:31:14.984865  370051 machine.go:88] provisioning docker machine ...
	I0229 02:31:14.984890  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:14.985140  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985324  370051 buildroot.go:166] provisioning hostname "old-k8s-version-275488"
	I0229 02:31:14.985347  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985494  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:14.988036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988438  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.988464  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988633  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:14.988829  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989003  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989174  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:14.989361  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:14.989604  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:14.989621  370051 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275488 && echo "old-k8s-version-275488" | sudo tee /etc/hostname
	I0229 02:31:15.125564  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275488
	
	I0229 02:31:15.125605  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.128963  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129570  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.129652  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129735  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.129996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130185  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130380  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.130616  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.130872  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.130900  370051 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275488/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:15.272298  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
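The shell fragment above follows the Debian/Buildroot convention of mapping the machine's own hostname to 127.0.1.1: if no /etc/hosts line already ends in the hostname, an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. The same edit as a Go sketch over a local /etc/hosts (hostname from the provisioning step; minikube runs the shell version over SSH, and writing /etc/hosts requires root):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func main() {
		const hostname = "old-k8s-version-275488" // from the provisioning step above
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		text := string(data)
		// Nothing to do if some entry already maps to this hostname.
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(text) {
			return
		}
		selfLine := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if selfLine.MatchString(text) {
			// Rewrite the existing self-reference entry in place.
			text = selfLine.ReplaceAllString(text, "127.0.1.1 "+hostname)
		} else {
			// No 127.0.1.1 entry yet: append one.
			text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + hostname + "\n"
		}
		if err := os.WriteFile("/etc/hosts", []byte(text), 0644); err != nil {
			panic(err)
		}
		fmt.Println("updated /etc/hosts")
	}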
	I0229 02:31:15.272337  370051 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:15.272368  370051 buildroot.go:174] setting up certificates
	I0229 02:31:15.272385  370051 provision.go:83] configureAuth start
	I0229 02:31:15.272402  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:15.272772  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:15.276382  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.276838  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.276869  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.277051  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.279927  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280298  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.280326  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280505  370051 provision.go:138] copyHostCerts
	I0229 02:31:15.280555  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:15.280566  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:15.280619  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:15.280749  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:15.280764  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:15.280789  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:15.280857  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:15.280871  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:15.280891  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:15.280954  370051 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275488 san=[192.168.39.160 192.168.39.160 localhost 127.0.0.1 minikube old-k8s-version-275488]
	I0229 02:31:15.360428  370051 provision.go:172] copyRemoteCerts
	I0229 02:31:15.360487  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:15.360512  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.363540  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.363931  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.363966  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.364154  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.364337  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.364495  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.364622  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.453643  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:15.483233  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 02:31:15.512164  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:15.543453  370051 provision.go:86] duration metric: configureAuth took 271.048547ms
	I0229 02:31:15.543484  370051 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:15.543705  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:31:15.543816  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.546472  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.546807  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.546835  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.547049  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.547272  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547455  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547662  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.547861  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.548035  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.548052  370051 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:15.835533  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:15.835572  370051 machine.go:91] provisioned docker machine in 850.691497ms
	I0229 02:31:15.835589  370051 start.go:300] post-start starting for "old-k8s-version-275488" (driver="kvm2")
	I0229 02:31:15.835604  370051 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:15.835635  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:15.835995  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:15.836025  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.838946  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839297  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.839330  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839460  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.839665  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.839839  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.840008  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.925849  370051 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:15.931227  370051 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:15.931260  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:15.931363  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:15.931465  370051 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:15.931574  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:15.942500  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:15.972803  370051 start.go:303] post-start completed in 137.19736ms
	I0229 02:31:15.972838  370051 fix.go:56] fixHost completed within 24.084893996s
	I0229 02:31:15.972873  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.975698  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976063  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.976093  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976279  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.976518  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976659  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976795  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.976959  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.977119  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.977130  370051 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:16.095892  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173876.041987567
	
	I0229 02:31:16.095917  370051 fix.go:206] guest clock: 1709173876.041987567
	I0229 02:31:16.095927  370051 fix.go:219] Guest: 2024-02-29 02:31:16.041987567 +0000 UTC Remote: 2024-02-29 02:31:15.972843681 +0000 UTC m=+279.886639354 (delta=69.143886ms)
	I0229 02:31:16.095954  370051 fix.go:190] guest clock delta is within tolerance: 69.143886ms
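
The "guest clock delta" line is a simple sanity check: compare the guest's clock to the host's and accept the drift if it stays under a threshold. A toy sketch follows; the one-second tolerance is an assumption, not minikube's exact constant.

package main

import (
	"fmt"
	"time"
)

func main() {
	const tolerance = time.Second // assumed threshold

	host := time.Now()
	guest := host.Add(69 * time.Millisecond) // the delta seen in the log above

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta // tolerance applies in both directions
	}
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
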
	I0229 02:31:16.095962  370051 start.go:83] releasing machines lock for "old-k8s-version-275488", held for 24.208056775s
	I0229 02:31:16.095996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.096336  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:16.099518  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100016  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.100060  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100189  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100751  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100955  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.101035  370051 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:16.101084  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.101167  370051 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:16.101190  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.104588  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.104638  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105000  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105059  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105101  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105335  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105546  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105590  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105821  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105832  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106002  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:16.106028  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106180  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.732828  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.739797  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.739827  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.232355  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.240421  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:16.240462  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.732451  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.740118  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:31:16.748529  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:31:16.748567  369869 api_server.go:131] duration metric: took 4.0165029s to wait for apiserver health ...
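
The probe loop above is plain HTTPS polling: hit /healthz until it returns 200 or a deadline passes. A self-contained sketch, with InsecureSkipVerify standing in for the real client-certificate setup; the URL and the roughly 500ms probe spacing are taken from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.233:8444/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
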
	I0229 02:31:16.748580  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:31:16.748588  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:16.750561  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:16.194120  370051 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:16.220808  370051 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:16.386082  370051 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:16.393419  370051 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:16.393512  370051 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:16.418966  370051 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
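
Disabling the conflicting CNI configs amounts to renaming anything matching *bridge* or *podman* with a .mk_disabled suffix, as the find/mv pipeline above does. The same effect in a small sketch:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already set aside
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			fmt.Println("disabled", m)
		}
	}
}
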
	I0229 02:31:16.419003  370051 start.go:475] detecting cgroup driver to use...
	I0229 02:31:16.419087  370051 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:16.444372  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:16.466354  370051 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:16.466430  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:16.488710  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:16.509561  370051 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:16.651716  370051 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:16.840453  370051 docker.go:233] disabling docker service ...
	I0229 02:31:16.840538  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:16.869611  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:16.890123  370051 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:17.047701  370051 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:17.225457  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:17.248553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:17.275486  370051 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 02:31:17.275572  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.290350  370051 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:17.290437  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.304093  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.320562  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
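
The sed invocations above each rewrite a single key in /etc/crio/crio.conf.d/02-crio.conf. For readers who prefer it spelled out, here is a rough Go equivalent of the first two edits; run it against a scratch copy, not a live host.

package main

import (
	"fmt"
	"log"
	"os"
	"regexp"
)

// setKey replaces any line assigning `key` with key = "value",
// matching the sed pattern s|^.*key = .*$|key = "value"| used above.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.1")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("updated", path)
}
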
	I0229 02:31:17.339790  370051 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:17.356570  370051 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:17.371208  370051 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:17.371303  370051 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:17.390748  370051 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:31:17.405750  370051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:17.555023  370051 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:17.754419  370051 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:17.754508  370051 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:17.760190  370051 start.go:543] Will wait 60s for crictl version
	I0229 02:31:17.760245  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:17.765195  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:17.815839  370051 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:17.815953  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.857470  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.896796  370051 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 02:31:13.906892  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:15.907106  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:16.914513  369591 node_ready.go:49] node "no-preload-247751" has status "Ready":"True"
	I0229 02:31:16.914545  369591 node_ready.go:38] duration metric: took 7.511932085s waiting for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:16.914560  369591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:16.925133  369591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940518  369591 pod_ready.go:92] pod "coredns-76f75df574-2z5w8" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:16.940553  369591 pod_ready.go:81] duration metric: took 15.382701ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940568  369591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.122967  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Start
	I0229 02:31:16.123141  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring networks are active...
	I0229 02:31:16.124019  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network default is active
	I0229 02:31:16.124630  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network mk-embed-certs-915633 is active
	I0229 02:31:16.125118  369508 main.go:141] libmachine: (embed-certs-915633) Getting domain xml...
	I0229 02:31:16.126026  369508 main.go:141] libmachine: (embed-certs-915633) Creating domain...
	I0229 02:31:17.664537  369508 main.go:141] libmachine: (embed-certs-915633) Waiting to get IP...
	I0229 02:31:17.665883  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.666462  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.666595  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.666455  371066 retry.go:31] will retry after 193.172159ms: waiting for machine to come up
	I0229 02:31:17.861043  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.861754  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.861781  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.861651  371066 retry.go:31] will retry after 298.133474ms: waiting for machine to come up
	I0229 02:31:18.161304  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.161851  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.161886  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.161818  371066 retry.go:31] will retry after 402.680342ms: waiting for machine to come up
	I0229 02:31:18.566482  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.567145  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.567165  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.567068  371066 retry.go:31] will retry after 536.886613ms: waiting for machine to come up
	I0229 02:31:19.106090  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.106797  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.106823  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.106714  371066 retry.go:31] will retry after 583.032631ms: waiting for machine to come up
	I0229 02:31:19.691531  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.692096  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.692127  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.692000  371066 retry.go:31] will retry after 780.156818ms: waiting for machine to come up
	I0229 02:31:16.752375  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:16.783785  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:31:16.816646  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:16.829430  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:16.829480  369869 system_pods.go:61] "coredns-5dd5756b68-652db" [d989183e-dc0d-4913-8eab-fdfac0cf7ad7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:16.829491  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [aba29f47-cf0e-4ee5-8d18-7647b36369e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:16.829501  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [26a426b2-d5b7-456e-a733-3317009974ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:16.829517  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [a896f9fa-991f-44bb-bd97-02fac3494eea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:16.829528  369869 system_pods.go:61] "kube-proxy-g976s" [bc750be0-ae2b-4033-b65b-f1cccaebf32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:16.829536  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [d99d25bf-25f4-4057-aedb-fc5ba797af47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:16.829544  369869 system_pods.go:61] "metrics-server-57f55c9bc5-86frx" [0ad81c0d-3f9a-45d8-93d8-bbb9e276d5b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:16.829560  369869 system_pods.go:61] "storage-provisioner" [92683c3e-04c1-4cef-988d-3b8beb7d4399] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:16.829570  369869 system_pods.go:74] duration metric: took 12.896339ms to wait for pod list to return data ...
	I0229 02:31:16.829584  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:16.837494  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:16.837524  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:16.837535  369869 node_conditions.go:105] duration metric: took 7.942051ms to run NodePressure ...
	I0229 02:31:16.837560  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:17.293873  369869 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300874  369869 kubeadm.go:787] kubelet initialised
	I0229 02:31:17.300907  369869 kubeadm.go:788] duration metric: took 7.00259ms waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300919  369869 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:17.315838  369869 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.328228  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328265  369869 pod_ready.go:81] duration metric: took 12.396088ms waiting for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.328278  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328287  369869 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.335458  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335487  369869 pod_ready.go:81] duration metric: took 7.145351ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.335497  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335505  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.356278  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356365  369869 pod_ready.go:81] duration metric: took 20.849982ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.356385  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356396  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:19.376170  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:17.898162  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:17.901332  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.901809  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:17.901840  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.902046  370051 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:17.907256  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:17.924135  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:31:17.924218  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:17.986923  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:17.986992  370051 ssh_runner.go:195] Run: which lz4
	I0229 02:31:17.992110  370051 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:17.997252  370051 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:17.997287  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 02:31:20.124958  370051 crio.go:444] Took 2.132885 seconds to copy over tarball
	I0229 02:31:20.125075  370051 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:31:18.948383  369591 pod_ready.go:102] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:20.950330  369591 pod_ready.go:92] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:20.950359  369591 pod_ready.go:81] duration metric: took 4.009782336s waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.950372  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460878  369591 pod_ready.go:92] pod "kube-apiserver-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.460907  369591 pod_ready.go:81] duration metric: took 1.510525429s waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460922  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468463  369591 pod_ready.go:92] pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.468487  369591 pod_ready.go:81] duration metric: took 7.556807ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468497  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476459  369591 pod_ready.go:92] pod "kube-proxy-cdc4l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.476488  369591 pod_ready.go:81] duration metric: took 7.983254ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476501  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482564  369591 pod_ready.go:92] pod "kube-scheduler-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.482589  369591 pod_ready.go:81] duration metric: took 6.080532ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482598  369591 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.474186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:20.474741  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:20.474784  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:20.474647  371066 retry.go:31] will retry after 845.550951ms: waiting for machine to come up
	I0229 02:31:21.322246  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:21.323007  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:21.323031  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:21.322935  371066 retry.go:31] will retry after 1.085864892s: waiting for machine to come up
	I0229 02:31:22.410244  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:22.410735  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:22.410766  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:22.410687  371066 retry.go:31] will retry after 1.587558593s: waiting for machine to come up
	I0229 02:31:24.000303  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:24.000914  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:24.000944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:24.000828  371066 retry.go:31] will retry after 2.058374822s: waiting for machine to come up
	I0229 02:31:21.867552  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.972250  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.981829  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.981860  369869 pod_ready.go:81] duration metric: took 6.625453699s waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.981875  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994568  369869 pod_ready.go:92] pod "kube-proxy-g976s" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.994597  369869 pod_ready.go:81] duration metric: took 12.712769ms waiting for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994609  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002085  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:24.002110  369869 pod_ready.go:81] duration metric: took 7.492788ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002133  369869 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.625489  370051 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.500380961s)
	I0229 02:31:23.625526  370051 crio.go:451] Took 3.500531 seconds to extract the tarball
	I0229 02:31:23.625536  370051 ssh_runner.go:146] rm: /preloaded.tar.lz4
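
The "Completed: ... (3.500380961s)" line is just a timed wrapper around a slow step. A sketch of producing such a duration metric around the same tar invocation, assuming a local /preloaded.tar.lz4:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	fmt.Printf("Completed: extract preload tarball: (%s)\n", time.Since(start))
}
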
	I0229 02:31:23.671458  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:23.714048  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:23.714087  370051 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:31:23.714189  370051 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.714213  370051 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.714309  370051 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:31:23.714424  370051 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.714269  370051 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.714461  370051 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.714519  370051 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.714192  370051 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.716086  370051 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.716076  370051 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716088  370051 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.716143  370051 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.716081  370051 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.716275  370051 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:31:23.838722  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.844569  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 02:31:23.853089  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.857738  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.864060  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.865519  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.926256  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.997349  370051 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:31:23.997401  370051 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.997463  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.010625  370051 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:31:24.010674  370051 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:31:24.010722  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083140  370051 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:31:24.083203  370051 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:31:24.083232  370051 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:31:24.083247  370051 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.083266  370051 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.083269  370051 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.083308  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083319  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083364  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083166  370051 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:31:24.083426  370051 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.083471  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123878  370051 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:31:24.123928  370051 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.123972  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123982  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:24.123973  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:31:24.124043  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.124051  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.124097  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.124153  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.152226  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.270585  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:31:24.305436  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:31:24.305532  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:31:24.305621  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:31:24.305629  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:31:24.305799  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:31:24.316950  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:31:24.635837  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:24.791670  370051 cache_images.go:92] LoadImages completed in 1.077558745s
	W0229 02:31:24.791798  370051 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
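
The needs-transfer decisions above come down to asking the container runtime for the image ID and treating a non-zero exit as "absent". A sketch of that check, using one image name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imagePresent(image string) (string, bool) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return "", false // non-zero exit: not in the container runtime
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	if id, ok := imagePresent("registry.k8s.io/kube-apiserver:v1.16.0"); ok {
		fmt.Println("present with ID", id)
	} else {
		fmt.Println("needs transfer from the local image cache")
	}
}
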
	I0229 02:31:24.791902  370051 ssh_runner.go:195] Run: crio config
	I0229 02:31:24.851132  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:31:24.851164  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:24.851189  370051 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:24.851213  370051 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275488 NodeName:old-k8s-version-275488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:31:24.851423  370051 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-275488
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.160:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:31:24.851524  370051 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275488 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:31:24.851598  370051 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:31:24.864237  370051 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:24.864330  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:24.879552  370051 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 02:31:24.901027  370051 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:24.920638  370051 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 02:31:24.941894  370051 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:24.947439  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
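
The /etc/hosts rewrite above drops any stale control-plane.minikube.internal line and appends the current one. The same idea without the shell pipeline (run as root, ideally against a scratch copy; names are from the log):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const path = "/etc/hosts"
	const entry = "192.168.39.160\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop any stale entry, as grep -v does above
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		log.Fatal(err)
	}
}
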
	I0229 02:31:24.962396  370051 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488 for IP: 192.168.39.160
	I0229 02:31:24.962435  370051 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:24.962621  370051 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:24.962673  370051 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:24.962781  370051 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.key
	I0229 02:31:24.962851  370051 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619
	I0229 02:31:24.962919  370051 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key
	I0229 02:31:24.963087  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:24.963126  370051 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:24.963138  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:24.963185  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:24.963213  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:24.963245  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:24.963296  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:24.963980  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:24.996049  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:25.030503  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:25.057695  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:25.091982  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:25.126636  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:25.156613  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:25.186480  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:25.221012  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:25.254122  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:25.282646  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:25.312624  370051 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:25.335020  370051 ssh_runner.go:195] Run: openssl version
	I0229 02:31:25.342920  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:25.355808  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361349  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361433  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.368335  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:25.380799  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:25.393069  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398466  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398539  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.404776  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:25.416735  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:25.428884  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434503  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434584  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.441187  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
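The hash-and-symlink sequence above follows OpenSSL's c_rehash convention: TLS libraries locate a trusted CA via a symlink named after the certificate's subject hash, which is exactly what `openssl x509 -hash -noout` prints (e.g. b5213941 above). A sketch, assuming a CA file at an illustrative path:

    #!/usr/bin/env bash
    # Link a CA cert into /etc/ssl/certs under its OpenSSL subject hash.
    set -euo pipefail

    CERT="./ca.pem"                                  # illustrative path
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # 8 hex chars, e.g. b5213941
    sudo cp "$CERT" /usr/share/ca-certificates/myca.pem
    # ".0" is the first slot; bump the suffix if another CA shares the hash.
    sudo ln -fs /usr/share/ca-certificates/myca.pem "/etc/ssl/certs/${HASH}.0"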
	I0229 02:31:25.453174  370051 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:25.458712  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:25.466032  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:25.473895  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:25.482948  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:25.491808  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:25.499003  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
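Each `-checkend 86400` run above exits non-zero if the certificate will have expired 86400 seconds (24 hours) from now, which is how minikube decides whether a cert needs regeneration. A sketch over two of the files from the log:

    #!/usr/bin/env bash
    # "openssl x509 -checkend N" exits 0 if the cert is still valid N
    # seconds from now, 1 if it will have expired. 86400s = 24h, as above.
    for crt in /var/lib/minikube/certs/apiserver-etcd-client.crt \
               /var/lib/minikube/certs/etcd/server.crt; do
        if ! sudo openssl x509 -noout -in "$crt" -checkend 86400; then
            echo "expiring within 24h, needs renewal: $crt"
        fi
    done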
	I0229 02:31:25.506691  370051 kubeadm.go:404] StartCluster: {Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:25.506829  370051 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:25.506883  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:25.551867  370051 cri.go:89] found id: ""
	I0229 02:31:25.551970  370051 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:25.564446  370051 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:25.564476  370051 kubeadm.go:636] restartCluster start
	I0229 02:31:25.564545  370051 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:25.576275  370051 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:25.577406  370051 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:25.578043  370051 kubeconfig.go:146] "old-k8s-version-275488" context is missing from /home/jenkins/minikube-integration/18063-316644/kubeconfig - will repair!
	I0229 02:31:25.578979  370051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:25.580805  370051 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:25.592154  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:25.592259  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:25.609268  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:26.092701  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.092827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.108636  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
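The `api_server.go:166` / `pgrep` pairs repeated above and below are iterations of a fixed-interval poll: the pgrep is retried roughly every 500ms until a kube-apiserver process shows up or an overall deadline expires (the `context deadline exceeded` reported further down). A sketch of the loop, with an assumed 10-second budget:

    #!/usr/bin/env bash
    # Poll for a running kube-apiserver; give up after DEADLINE seconds.
    DEADLINE=10                       # illustrative; minikube's budget is longer
    end=$(( $(date +%s) + DEADLINE ))

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if (( $(date +%s) >= end )); then
            echo "apiserver never came up: context deadline exceeded" >&2
            exit 1
        fi
        sleep 0.5                     # matches the ~500ms spacing in the log
    done
    echo "apiserver pid: $(sudo pgrep -xnf 'kube-apiserver.*minikube.*')"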
	I0229 02:31:24.491508  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.492827  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.496040  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.062093  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:26.062582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:26.062612  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:26.062525  371066 retry.go:31] will retry after 2.231071357s: waiting for machine to come up
	I0229 02:31:28.295693  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:28.296180  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:28.296214  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:28.296116  371066 retry.go:31] will retry after 2.376277578s: waiting for machine to come up
	I0229 02:31:26.010834  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.031628  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.592320  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.592412  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.606907  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.092891  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.093028  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.112353  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.592956  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.593058  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.612315  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.092611  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.092729  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.108095  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.592592  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.592679  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.612145  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.092605  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.092720  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.113807  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.593002  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.593085  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.609337  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.092667  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.092757  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.112800  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.592328  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.592415  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.610909  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:31.092418  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.092529  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.109490  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.990551  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.990785  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:30.675432  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:30.675962  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:30.675995  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:30.675901  371066 retry.go:31] will retry after 4.442717853s: waiting for machine to come up
	I0229 02:31:30.511576  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.515611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:35.010325  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:31.593046  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.593128  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.608148  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.092187  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.092299  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.107573  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.593184  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.593312  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.607993  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.092500  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.092603  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.107359  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.592987  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.593101  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.608041  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.092919  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.093023  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.107597  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.593200  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.593295  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.608100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.092589  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.092683  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.107100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.592815  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.592928  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.610879  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.610916  370051 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:35.610930  370051 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:35.610947  370051 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:35.611032  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:35.660059  370051 cri.go:89] found id: ""
	I0229 02:31:35.660146  370051 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:35.682067  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:35.694455  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:35.694542  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707118  370051 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707149  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:35.834811  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:35.123364  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123906  369508 main.go:141] libmachine: (embed-certs-915633) Found IP for machine: 192.168.50.218
	I0229 02:31:35.123925  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has current primary IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123931  369508 main.go:141] libmachine: (embed-certs-915633) Reserving static IP address...
	I0229 02:31:35.124398  369508 main.go:141] libmachine: (embed-certs-915633) Reserved static IP address: 192.168.50.218
	I0229 02:31:35.124423  369508 main.go:141] libmachine: (embed-certs-915633) Waiting for SSH to be available...
	I0229 02:31:35.124441  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.124468  369508 main.go:141] libmachine: (embed-certs-915633) DBG | skip adding static IP to network mk-embed-certs-915633 - found existing host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"}
	I0229 02:31:35.124487  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Getting to WaitForSSH function...
	I0229 02:31:35.126676  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127004  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.127035  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127137  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH client type: external
	I0229 02:31:35.127168  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa (-rw-------)
	I0229 02:31:35.127199  369508 main.go:141] libmachine: (embed-certs-915633) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:35.127213  369508 main.go:141] libmachine: (embed-certs-915633) DBG | About to run SSH command:
	I0229 02:31:35.127224  369508 main.go:141] libmachine: (embed-certs-915633) DBG | exit 0
	I0229 02:31:35.251075  369508 main.go:141] libmachine: (embed-certs-915633) DBG | SSH cmd err, output: <nil>: 
	I0229 02:31:35.251474  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetConfigRaw
	I0229 02:31:35.252256  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.254934  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255350  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.255378  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255676  369508 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/config.json ...
	I0229 02:31:35.255881  369508 machine.go:88] provisioning docker machine ...
	I0229 02:31:35.255905  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:35.256154  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256344  369508 buildroot.go:166] provisioning hostname "embed-certs-915633"
	I0229 02:31:35.256369  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256506  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.258794  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259163  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.259186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259337  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.259551  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259716  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259875  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.260066  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.260256  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.260269  369508 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-915633 && echo "embed-certs-915633" | sudo tee /etc/hostname
	I0229 02:31:35.383734  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-915633
	
	I0229 02:31:35.383770  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.386559  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.386913  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.386944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.387121  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.387359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387631  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387815  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.387979  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.388158  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.388175  369508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-915633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-915633/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-915633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:35.521449  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:31:35.521490  369508 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:35.521530  369508 buildroot.go:174] setting up certificates
	I0229 02:31:35.521544  369508 provision.go:83] configureAuth start
	I0229 02:31:35.521573  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.521923  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.524829  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525193  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.525217  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525348  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.527582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.527980  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.528012  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.528164  369508 provision.go:138] copyHostCerts
	I0229 02:31:35.528216  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:35.528234  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:35.528290  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:35.528384  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:35.528396  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:35.528415  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:35.528514  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:35.528525  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:35.528544  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:35.528591  369508 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.embed-certs-915633 san=[192.168.50.218 192.168.50.218 localhost 127.0.0.1 minikube embed-certs-915633]
	I0229 02:31:35.778616  369508 provision.go:172] copyRemoteCerts
	I0229 02:31:35.778679  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:35.778706  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.782134  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782605  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.782640  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782833  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.783103  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.783305  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.783522  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:35.870506  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:35.904595  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:31:35.936515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:35.966505  369508 provision.go:86] duration metric: configureAuth took 444.939951ms
	I0229 02:31:35.966539  369508 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:35.966725  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:31:35.966831  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.969731  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970133  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.970176  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970402  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.970623  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970788  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970968  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.971139  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.971382  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.971401  369508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:36.262676  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:36.262719  369508 machine.go:91] provisioned docker machine in 1.00682197s
	I0229 02:31:36.262731  369508 start.go:300] post-start starting for "embed-certs-915633" (driver="kvm2")
	I0229 02:31:36.262743  369508 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:36.262765  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.263140  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:36.263179  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.265718  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266095  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.266126  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.266486  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.266658  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.266795  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.359474  369508 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:36.365071  369508 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:36.365110  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:36.365202  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:36.365279  369508 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:36.365395  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:36.376823  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:36.406525  369508 start.go:303] post-start completed in 143.75518ms
	I0229 02:31:36.406588  369508 fix.go:56] fixHost completed within 20.310442727s
	I0229 02:31:36.406619  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.409415  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.409840  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.409875  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.410009  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.410214  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410412  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410567  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.410715  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:36.410936  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:36.410950  369508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:36.520508  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173896.494400897
	
	I0229 02:31:36.520543  369508 fix.go:206] guest clock: 1709173896.494400897
	I0229 02:31:36.520555  369508 fix.go:219] Guest: 2024-02-29 02:31:36.494400897 +0000 UTC Remote: 2024-02-29 02:31:36.406594326 +0000 UTC m=+361.755087901 (delta=87.806571ms)
	I0229 02:31:36.520584  369508 fix.go:190] guest clock delta is within tolerance: 87.806571ms
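fix.go reads the guest's clock with `date +%s.%N`, diffs it against the host clock, and proceeds only when the delta is inside a tolerance (87.8ms here). A sketch of that comparison over SSH, with an assumed target and an assumed 2-second tolerance:

    #!/usr/bin/env bash
    # Compare guest vs. host wall clock; flag drift beyond TOLERANCE seconds.
    set -euo pipefail

    GUEST="docker@192.168.50.218"     # illustrative SSH target
    TOLERANCE="2"                     # assumed tolerance; minikube's differs

    guest=$(ssh "$GUEST" 'date +%s.%N')
    host=$(date +%s.%N)
    # bc keeps the sub-second precision that shell integer arithmetic drops.
    delta=$(echo "d = $host - $guest; if (d < 0) d = -d; d" | bc -l)
    if (( $(echo "$delta > $TOLERANCE" | bc -l) )); then
        echo "guest clock drift ${delta}s exceeds ${TOLERANCE}s" >&2
    fi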
	I0229 02:31:36.520597  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 20.424490067s
	I0229 02:31:36.520629  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.520949  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:36.523819  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524146  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.524185  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.524912  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525109  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525206  369508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:36.525251  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.525332  369508 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:36.525360  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.528265  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528470  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528614  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528641  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528826  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528829  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.528855  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.529047  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.529135  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529253  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529321  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529414  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529478  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.529556  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.611757  369508 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:36.638875  369508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:36.786219  369508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:36.798964  369508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:36.799056  369508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:36.817942  369508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:36.817975  369508 start.go:475] detecting cgroup driver to use...
	I0229 02:31:36.818086  369508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:36.837019  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:36.855078  369508 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:36.855159  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:36.873444  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:36.891708  369508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:37.031928  369508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:37.212859  369508 docker.go:233] disabling docker service ...
	I0229 02:31:37.212960  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:37.235232  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:37.253901  369508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:37.401366  369508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:37.530791  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:37.547864  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:37.570344  369508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:31:37.570416  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.582275  369508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:37.582345  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.593628  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.605168  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.616567  369508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:37.628153  369508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:37.638579  369508 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:37.638640  369508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:37.652738  369508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
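The status-255 sysctl above is an expected probe failure: `net.bridge.bridge-nf-call-iptables` only exists once the br_netfilter module is loaded, and kube-proxy relies on it so bridged pod traffic traverses iptables. The fallback order from the log, as a sketch:

    #!/usr/bin/env bash
    # Ensure bridged traffic is visible to iptables, loading br_netfilter
    # only when the sysctl key is missing (as in the log above).
    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter
    fi
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'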
	I0229 02:31:37.664118  369508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:37.785330  369508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:37.933006  369508 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:37.933095  369508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:37.938625  369508 start.go:543] Will wait 60s for crictl version
	I0229 02:31:37.938702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:31:37.943285  369508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:37.984992  369508 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:37.985105  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.018467  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.051472  369508 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:31:34.991345  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.991987  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:38.052850  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:38.055688  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.055970  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:38.056006  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.056253  369508 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:38.060925  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:38.076126  369508 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:31:38.076197  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:38.116261  369508 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:31:38.116372  369508 ssh_runner.go:195] Run: which lz4
	I0229 02:31:38.121080  369508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:38.125711  369508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:38.125755  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:31:37.012008  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:39.018348  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.790885  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.042778  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.130251  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.215289  370051 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:37.215384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:37.715589  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.715938  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.215781  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.716505  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.216238  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.716182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.992988  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.491712  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.492458  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:40.139859  369508 crio.go:444] Took 2.018817 seconds to copy over tarball
	I0229 02:31:40.139953  369508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:31:43.071745  369508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.931752333s)
	I0229 02:31:43.071797  369508 crio.go:451] Took 2.931905 seconds to extract the tarball
	I0229 02:31:43.071809  369508 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:31:43.118127  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:43.171147  369508 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:31:43.171176  369508 cache_images.go:84] Images are preloaded, skipping loading
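The preload flow above is: confirm no tarball is present, scp the ~458 MB image cache over, unpack it under /var with lz4 while preserving file capabilities, delete the tarball, and re-check that crictl now sees the images. A rough Go rendition of the extract-and-clean-up step, using the same tar flags as the log (a sketch only, minus the ssh transport):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Extract under /var, keeping the security.capability xattrs that
        // the cached binaries and images rely on.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
        // The tarball is removed afterwards (ssh_runner.go:146 above).
        if err := os.Remove("/preloaded.tar.lz4"); err != nil {
            log.Fatal(err)
        }
    }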
	I0229 02:31:43.171262  369508 ssh_runner.go:195] Run: crio config
	I0229 02:31:43.232177  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:31:43.232203  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:43.232229  369508 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:43.232247  369508 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-915633 NodeName:embed-certs-915633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:31:43.232419  369508 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-915633"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:31:43.232519  369508 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-915633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:31:43.232596  369508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:31:43.244392  369508 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:43.244467  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:43.256293  369508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0229 02:31:43.275397  369508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:43.295494  369508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0229 02:31:43.316812  369508 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:43.321496  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
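The /etc/hosts edit above uses a filter-and-append pattern: strip any stale line for the name, append the fresh IP mapping, and copy the result back over the file in one step. A hypothetical Go equivalent of that bash pipeline (names and IP from the log):

    package main

    import (
        "os"
        "strings"
    )

    // setHost drops any stale entry for name and appends "ip<TAB>name",
    // mirroring the grep -v / echo / cp pipeline in the log.
    func setHost(name, ip string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) { // drop stale entry, as grep -v does
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        tmp := "/etc/hosts.tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, "/etc/hosts") // needs root, like the sudo cp above
    }

    func main() {
        if err := setHost("control-plane.minikube.internal", "192.168.50.218"); err != nil {
            panic(err)
        }
    }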
	I0229 02:31:43.335055  369508 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633 for IP: 192.168.50.218
	I0229 02:31:43.335092  369508 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:43.335270  369508 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:43.335316  369508 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:43.335388  369508 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/client.key
	I0229 02:31:43.335442  369508 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key.cc0da009
	I0229 02:31:43.335475  369508 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key
	I0229 02:31:43.335584  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:43.335610  369508 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:43.335619  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:43.335642  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:43.335673  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:43.335710  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:43.335779  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:43.336455  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:43.364985  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:43.394189  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:43.424515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:43.456589  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:43.486396  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:43.516931  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:43.546421  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:43.578923  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:43.608333  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:43.637196  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:43.667522  369508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:43.688266  369508 ssh_runner.go:195] Run: openssl version
	I0229 02:31:43.695616  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:43.709892  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715346  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715426  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.722688  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:43.735866  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:43.749967  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757599  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757671  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.765157  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:43.779671  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:43.792900  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798505  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798576  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.805192  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:31:43.818233  369508 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:43.823681  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:43.831016  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:43.837899  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:43.844802  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:43.851881  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:43.858689  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
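Each openssl run above is `x509 -checkend 86400`, i.e. "does this certificate stay valid for at least another day". The same check expressed in Go against one of the paths probed above (illustrative only; error handling reduced to panics):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // -checkend 86400: fail if the cert expires within the next day.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least 24h")
    }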
	I0229 02:31:43.865749  369508 kubeadm.go:404] StartCluster: {Name:embed-certs-915633 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:43.865852  369508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:43.865925  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:43.906012  369508 cri.go:89] found id: ""
	I0229 02:31:43.906116  369508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:43.918241  369508 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:43.918265  369508 kubeadm.go:636] restartCluster start
	I0229 02:31:43.918349  369508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:43.930524  369508 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:43.931550  369508 kubeconfig.go:92] found "embed-certs-915633" server: "https://192.168.50.218:8443"
	I0229 02:31:43.933612  369508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:43.944469  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:43.944519  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:43.958194  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:44.444746  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:44.444840  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:44.458567  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:41.510364  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.511424  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.216236  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:41.716082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.215537  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.715524  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.215873  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.715634  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.216464  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.715519  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.216430  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.716196  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.990995  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.489390  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:44.944934  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.003707  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.018797  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.445348  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.445435  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.460199  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.944750  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.944879  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.959309  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.445218  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.445313  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.459195  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.945456  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.945538  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.959212  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.444711  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.444819  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.459189  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.944651  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.944726  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.958733  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.445008  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.445100  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.460126  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.944649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.944731  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.959993  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:49.444545  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.444628  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.458889  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.011657  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.508465  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:46.215715  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:46.715657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.216495  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.715491  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.215459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.715556  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.215675  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.716046  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.215993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.715594  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.489578  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.990638  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:49.945108  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.945265  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.960625  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.444843  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.444923  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.459329  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.944871  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.944963  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.959583  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.444601  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.444704  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.462037  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.944573  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.944658  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.958538  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.445111  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.445269  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.462902  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.945088  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.945182  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.960241  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.444649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.444738  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.458642  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.945214  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.945291  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.960552  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.960588  369508 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:53.960600  369508 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:53.960615  369508 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:53.960671  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:54.005230  369508 cri.go:89] found id: ""
	I0229 02:31:54.005321  369508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:54.027544  369508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:54.040517  369508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:54.040577  369508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051200  369508 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051223  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:54.168817  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:50.509119  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.509526  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:54.511540  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:51.215927  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:51.715888  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.215659  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.715769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.216175  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.715755  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.216468  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.216280  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.715924  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.992721  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:57.490570  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:55.091652  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.346578  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.443373  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
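At this point the restart path has replayed the individual kubeadm init phases rather than running a full `kubeadm init`: certs, kubeconfigs, kubelet start, control plane, etcd. A hypothetical condensation of that sequence (the real code runs each phase over ssh with a pinned PATH, which is omitted here):

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same order as the log: certs -> kubeconfigs -> kubelet -> control plane -> etcd.
        for _, phase := range []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        } {
            args := append([]string{"init", "phase"}, strings.Fields(phase)...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
            }
        }
    }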
	I0229 02:31:55.542444  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:55.542562  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.042870  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.542972  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.571776  369508 api_server.go:72] duration metric: took 1.029332492s to wait for apiserver process to appear ...
	I0229 02:31:56.571808  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:56.571831  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:56.572606  369508 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0229 02:31:57.072145  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.557011  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.557048  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.557066  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.609944  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.610010  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.610028  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.669911  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:59.669955  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:57.010655  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:59.510097  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:00.071971  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.084661  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.084690  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:00.572262  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.577772  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.577807  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:01.072371  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:01.077306  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:32:01.084492  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:32:01.084531  369508 api_server.go:131] duration metric: took 4.512702749s to wait for apiserver health ...
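The healthz progression above is typical for a cold apiserver: 403 while RBAC for system:anonymous is not yet bootstrapped, 500 while the poststarthooks drain, then 200. A minimal poller in the same spirit (IP and timeout taken from the log; TLS verification is skipped because the probe is anonymous; a sketch, not minikube's implementation):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.50.218:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz status:", resp.StatusCode) // 403/500 while booting
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }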
	I0229 02:32:01.084544  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:32:01.084554  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:32:01.086337  369508 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:56.215653  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.715898  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.215954  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.216366  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.716093  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.215944  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.715553  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.216341  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.715677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.087584  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:32:01.099724  369508 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:32:01.122381  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:32:01.133632  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:32:01.133674  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:32:01.133684  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:32:01.133697  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:32:01.133710  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:32:01.133720  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:32:01.133728  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:32:01.133738  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:32:01.133746  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:32:01.133755  369508 system_pods.go:74] duration metric: took 11.346225ms to wait for pod list to return data ...
	I0229 02:32:01.133767  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:32:01.138716  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:32:01.138746  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:32:01.138760  369508 node_conditions.go:105] duration metric: took 4.985648ms to run NodePressure ...
	I0229 02:32:01.138783  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:01.368503  369508 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373648  369508 kubeadm.go:787] kubelet initialised
	I0229 02:32:01.373669  369508 kubeadm.go:788] duration metric: took 5.137378ms waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373677  369508 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:01.379649  369508 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.384724  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384750  369508 pod_ready.go:81] duration metric: took 5.071017ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.384758  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384765  369508 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.390019  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390048  369508 pod_ready.go:81] duration metric: took 5.27491ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.390059  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390067  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.396275  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396294  369508 pod_ready.go:81] duration metric: took 6.218856ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.396302  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396307  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.525881  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525914  369508 pod_ready.go:81] duration metric: took 129.596783ms waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.525927  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525935  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.926806  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926843  369508 pod_ready.go:81] duration metric: took 400.889304ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.926856  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926864  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.326588  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326621  369508 pod_ready.go:81] duration metric: took 399.74816ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.326633  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326639  369508 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.727730  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727759  369508 pod_ready.go:81] duration metric: took 401.108694ms waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.727769  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727776  369508 pod_ready.go:38] duration metric: took 1.354090438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
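
The pod_ready.go checks above boil down to reading the PodReady condition off each system-critical pod — and, as the "(skipping!)" lines show, bailing out early while the node itself still reports Ready=False. A rough client-go equivalent of the per-pod check, assuming a reachable kubeconfig; the helper name is illustrative, not minikube's:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-kt28m", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ready:", podIsReady(pod))
    }
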
	I0229 02:32:02.727795  369508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:32:02.742069  369508 ops.go:34] apiserver oom_adj: -16
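
The oom_adj check just above verifies that the apiserver is protected from the kernel OOM killer: -16 on the legacy -17..+15 oom_adj scale means the kernel will strongly prefer other victims under memory pressure. A sketch of the same probe done directly (run on the node; modern kernels expose the equivalent oom_score_adj alongside the legacy file):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // apiserverOOMAdj finds the newest kube-apiserver PID and reads its
    // legacy oom_adj value from /proc.
    func apiserverOOMAdj() (string, error) {
    	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
    	if err != nil {
    		return "", fmt.Errorf("no kube-apiserver process: %w", err)
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(adj)), nil
    }

    func main() {
    	adj, err := apiserverOOMAdj()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver oom_adj:", adj) // the log above saw -16
    }
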
	I0229 02:32:02.742097  369508 kubeadm.go:640] restartCluster took 18.823823408s
	I0229 02:32:02.742107  369508 kubeadm.go:406] StartCluster complete in 18.876382148s
	I0229 02:32:02.742127  369508 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.742271  369508 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:32:02.744032  369508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.744292  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:32:02.744429  369508 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:32:02.744507  369508 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-915633"
	I0229 02:32:02.744526  369508 addons.go:69] Setting default-storageclass=true in profile "embed-certs-915633"
	I0229 02:32:02.744540  369508 addons.go:69] Setting metrics-server=true in profile "embed-certs-915633"
	I0229 02:32:02.744550  369508 addons.go:234] Setting addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:02.744555  369508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-915633"
	W0229 02:32:02.744558  369508 addons.go:243] addon metrics-server should already be in state true
	I0229 02:32:02.744619  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744532  369508 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-915633"
	W0229 02:32:02.744735  369508 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:32:02.744853  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744682  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:32:02.745085  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745113  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745121  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745175  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745339  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745416  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.749865  369508 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-915633" context rescaled to 1 replicas
	I0229 02:32:02.749907  369508 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:32:02.751823  369508 out.go:177] * Verifying Kubernetes components...
	I0229 02:32:02.753296  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:32:02.762688  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0229 02:32:02.763050  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0229 02:32:02.763274  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763693  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763872  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.763895  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.763963  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0229 02:32:02.764307  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.764337  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.764554  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.764592  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.764665  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.765103  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765135  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765144  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765170  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765481  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.765495  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.765863  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.766129  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.769253  369508 addons.go:234] Setting addon default-storageclass=true in "embed-certs-915633"
	W0229 02:32:02.769274  369508 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:32:02.769295  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.769578  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.769607  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.787345  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35577
	I0229 02:32:02.787806  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.788243  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.788266  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.789755  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0229 02:32:02.790272  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.790361  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0229 02:32:02.790634  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.790727  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.791027  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.791192  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791206  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791367  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791402  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791705  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.791924  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.792315  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.792987  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.793026  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.793278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.795128  369508 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:32:02.794105  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.796451  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:32:02.796472  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:32:02.796496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.797812  369508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:59.493919  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.989683  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:02.799249  369508 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.799270  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:32:02.799289  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.800109  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.800960  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.801015  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.801300  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.801496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.801635  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.801763  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.802278  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802617  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.802645  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802836  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.803026  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.803174  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.803390  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.818656  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0229 02:32:02.819105  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.819606  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.819625  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.820022  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.820366  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.822054  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.822412  369508 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:02.822432  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:32:02.822451  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.825579  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826260  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.826293  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826463  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.826614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.826761  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.826954  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
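
Each "new ssh client" line above corresponds to a fresh SSH connection into the VM (192.168.50.218:22, key auth as user "docker") over which the following scp and kubectl commands run. A bare-bones version of that plumbing with golang.org/x/crypto/ssh, reusing the key path from the log; host-key verification is disabled purely because this is a throwaway test VM:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
    	}
    	client, err := ssh.Dial("tcp", "192.168.50.218:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("sudo systemctl is-active kubelet")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out)
    }
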
	I0229 02:32:02.911316  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.945655  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:32:02.945683  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:32:02.981318  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:32:02.981352  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:32:02.983632  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:03.009561  369508 node_ready.go:35] waiting up to 6m0s for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:03.009586  369508 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:32:03.044265  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:03.044293  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:32:03.094073  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:04.287008  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.3033415s)
	I0229 02:32:04.287081  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287094  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287375  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37602435s)
	I0229 02:32:04.287416  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287428  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287440  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287463  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287478  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287487  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287750  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287800  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287828  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287861  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287805  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287914  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287834  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.287774  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289370  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289377  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.289397  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.293892  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.293919  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.294180  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.294198  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.294212  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.376595  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.28244915s)
	I0229 02:32:04.376679  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.376710  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377004  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377022  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377031  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.377039  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377037  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377275  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377319  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377331  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377348  369508 addons.go:470] Verifying addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:04.380194  369508 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:32:04.381510  369508 addons.go:505] enable addons completed in 1.637082823s: enabled=[storage-provisioner default-storageclass metrics-server]
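
The addon flow that just completed is: render each manifest in memory, scp it into /etc/kubernetes/addons/ on the VM, then apply the batch with the in-VM kubeconfig. Reduced to the final apply step as Go driving the same command line the log shows (a sketch run on the node itself, not minikube's addons code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Apply the metrics-server manifests the way the log shows: kubectl from
    	// the pinned binaries dir, the node's kubeconfig, all files in one call.
    	cmd := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s", out)
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }
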
	I0229 02:32:02.010578  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:04.509975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.216197  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.716302  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.216170  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.715615  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.216580  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.716088  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.215743  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.716142  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.216543  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.715853  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.991440  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.992389  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:08.491225  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.014879  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.518854  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.009085  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:09.009296  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:06.216206  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:06.715748  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.215964  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.716419  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.216034  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.715611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.216207  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.716408  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.216144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.716454  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.491751  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:12.991326  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:10.013574  369508 node_ready.go:49] node "embed-certs-915633" has status "Ready":"True"
	I0229 02:32:10.013605  369508 node_ready.go:38] duration metric: took 7.004009102s waiting for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:10.013617  369508 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:10.020332  369508 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025740  369508 pod_ready.go:92] pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.025766  369508 pod_ready.go:81] duration metric: took 5.403764ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025778  369508 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534182  369508 pod_ready.go:92] pod "etcd-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.534212  369508 pod_ready.go:81] duration metric: took 508.426559ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534238  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.048997  369508 pod_ready.go:92] pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:11.049027  369508 pod_ready.go:81] duration metric: took 514.780048ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.049040  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:13.056477  369508 pod_ready.go:102] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.010305  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:13.011477  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.215611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:11.716198  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.216332  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.716413  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.216407  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.716466  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.216182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.716285  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.215995  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.715613  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.491511  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:17.494485  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:15.056064  369508 pod_ready.go:92] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.056093  369508 pod_ready.go:81] duration metric: took 4.007044542s waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.056104  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061418  369508 pod_ready.go:92] pod "kube-proxy-6tt7l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.061440  369508 pod_ready.go:81] duration metric: took 5.329971ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061451  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578305  369508 pod_ready.go:92] pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.578332  369508 pod_ready.go:81] duration metric: took 516.873281ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578341  369508 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:17.585624  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:19.586470  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:15.510630  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:18.010381  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:16.215530  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:16.716420  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.216031  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.716303  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.216082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.715523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.216166  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.716503  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.215680  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.715770  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.989766  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.989821  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.586820  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:20.509895  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.010371  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.215523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:21.715617  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.216133  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.716029  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.216141  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.715578  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.215640  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.715601  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.215959  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.716394  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.990493  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.990911  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.489681  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.085933  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.086754  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.508765  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:27.508956  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:29.512409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.215946  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:26.715834  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.216243  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.715581  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.215521  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.715849  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.716497  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.215657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.715492  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.490400  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:32.990250  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:30.586107  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:33.086852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.518170  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:34.009514  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.216322  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:31.716160  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.215557  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.715618  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.215761  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.716216  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.216460  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.716244  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.215551  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.715633  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.990305  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.990956  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:35.585472  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:37.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.509112  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:38.509634  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.215910  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:36.716307  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:37.216308  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:37.216404  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:37.262324  370051 cri.go:89] found id: ""
	I0229 02:32:37.262358  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.262370  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:37.262378  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:37.262442  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:37.303758  370051 cri.go:89] found id: ""
	I0229 02:32:37.303790  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.303802  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:37.303809  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:37.303880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:37.349512  370051 cri.go:89] found id: ""
	I0229 02:32:37.349538  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.349546  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:37.349553  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:37.349607  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:37.389630  370051 cri.go:89] found id: ""
	I0229 02:32:37.389657  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.389668  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:37.389676  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:37.389752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:37.435918  370051 cri.go:89] found id: ""
	I0229 02:32:37.435954  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.435967  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:37.435976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:37.436044  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:37.479336  370051 cri.go:89] found id: ""
	I0229 02:32:37.479369  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.479377  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:37.479384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:37.479460  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:37.519944  370051 cri.go:89] found id: ""
	I0229 02:32:37.519979  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.519991  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:37.519999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:37.520071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:37.563848  370051 cri.go:89] found id: ""
	I0229 02:32:37.563875  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.563884  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:37.563895  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:37.563915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:37.607989  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:37.608025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:37.660272  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:37.660324  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:37.676878  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:37.676909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:37.805099  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:37.805132  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:37.805159  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:40.378467  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:40.393066  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:40.393221  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:40.432592  370051 cri.go:89] found id: ""
	I0229 02:32:40.432619  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.432628  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:40.432634  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:40.432693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:40.473651  370051 cri.go:89] found id: ""
	I0229 02:32:40.473706  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.473716  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:40.473722  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:40.473781  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:40.520262  370051 cri.go:89] found id: ""
	I0229 02:32:40.520292  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.520303  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:40.520312  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:40.520374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:40.560359  370051 cri.go:89] found id: ""
	I0229 02:32:40.560393  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.560402  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:40.560408  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:40.560474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:40.602145  370051 cri.go:89] found id: ""
	I0229 02:32:40.602173  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.602181  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:40.602187  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:40.602266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:40.640744  370051 cri.go:89] found id: ""
	I0229 02:32:40.640778  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.640791  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:40.640799  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:40.640869  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:40.681863  370051 cri.go:89] found id: ""
	I0229 02:32:40.681895  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.681908  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:40.681916  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:40.681985  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:40.725859  370051 cri.go:89] found id: ""
	I0229 02:32:40.725890  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.725899  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:40.725910  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:40.725924  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:40.794666  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:40.794705  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:40.854173  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:40.854215  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:40.901744  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:40.901786  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:40.925331  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:40.925371  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:41.005785  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:39.491292  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.494077  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:40.086540  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:42.584644  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:44.587012  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.010764  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.510128  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.506756  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:43.522038  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:43.522135  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:43.559609  370051 cri.go:89] found id: ""
	I0229 02:32:43.559635  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.559642  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:43.559649  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:43.559707  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:43.609059  370051 cri.go:89] found id: ""
	I0229 02:32:43.609087  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.609096  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:43.609102  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:43.609159  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:43.648988  370051 cri.go:89] found id: ""
	I0229 02:32:43.649018  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.649029  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:43.649037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:43.649104  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:43.690995  370051 cri.go:89] found id: ""
	I0229 02:32:43.691028  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.691042  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:43.691054  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:43.691120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:43.729221  370051 cri.go:89] found id: ""
	I0229 02:32:43.729249  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.729257  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:43.729263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:43.729334  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:43.767141  370051 cri.go:89] found id: ""
	I0229 02:32:43.767174  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.767186  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:43.767194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:43.767266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:43.807926  370051 cri.go:89] found id: ""
	I0229 02:32:43.807962  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.807970  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:43.807976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:43.808029  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:43.857945  370051 cri.go:89] found id: ""
	I0229 02:32:43.857973  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.857981  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:43.857991  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:43.858005  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:43.941290  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:43.941338  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:43.986788  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:43.986823  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:44.037384  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:44.037421  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:44.052668  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:44.052696  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:44.127124  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:43.990179  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.990921  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.991525  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.086821  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:49.585987  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.510273  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:48.009067  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:50.011776  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:46.627409  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:46.642306  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:46.642397  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:46.685358  370051 cri.go:89] found id: ""
	I0229 02:32:46.685389  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.685400  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:46.685431  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:46.685493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:46.724996  370051 cri.go:89] found id: ""
	I0229 02:32:46.725026  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.725035  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:46.725041  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:46.725113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:46.765815  370051 cri.go:89] found id: ""
	I0229 02:32:46.765849  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.765857  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:46.765863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:46.765924  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:46.808946  370051 cri.go:89] found id: ""
	I0229 02:32:46.808980  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.808991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:46.809000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:46.809068  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:46.865068  370051 cri.go:89] found id: ""
	I0229 02:32:46.865106  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.865119  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:46.865127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:46.865200  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:46.932233  370051 cri.go:89] found id: ""
	I0229 02:32:46.932260  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.932268  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:46.932275  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:46.932331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:46.985701  370051 cri.go:89] found id: ""
	I0229 02:32:46.985732  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.985744  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:46.985752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:46.985819  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:47.027497  370051 cri.go:89] found id: ""
	I0229 02:32:47.027524  370051 logs.go:276] 0 containers: []
	W0229 02:32:47.027536  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:47.027548  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:47.027565  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:47.075955  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:47.075990  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:47.093922  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:47.093949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:47.165000  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:47.165029  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:47.165046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:47.250161  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:47.250201  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:49.794654  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:49.809706  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:49.809787  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:49.868163  370051 cri.go:89] found id: ""
	I0229 02:32:49.868197  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.868217  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:49.868223  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:49.868277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:49.928462  370051 cri.go:89] found id: ""
	I0229 02:32:49.928495  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.928508  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:49.928516  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:49.928580  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:49.975725  370051 cri.go:89] found id: ""
	I0229 02:32:49.975755  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.975765  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:49.975774  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:49.975849  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:50.017007  370051 cri.go:89] found id: ""
	I0229 02:32:50.017036  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.017046  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:50.017051  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:50.017118  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:50.054522  370051 cri.go:89] found id: ""
	I0229 02:32:50.054551  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.054560  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:50.054566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:50.054620  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:50.096274  370051 cri.go:89] found id: ""
	I0229 02:32:50.096300  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.096308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:50.096319  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:50.096382  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:50.142543  370051 cri.go:89] found id: ""
	I0229 02:32:50.142581  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.142590  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:50.142597  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:50.142667  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:50.182452  370051 cri.go:89] found id: ""
	I0229 02:32:50.182482  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.182492  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:50.182505  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:50.182522  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:50.266311  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:50.266355  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:50.309277  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:50.309322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:50.360492  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:50.360536  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:50.376711  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:50.376744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:50.447128  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
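	Each cycle above is the same enumeration: cri.go probes a fixed list of component names with crictl, finds zero containers for every one, and logs.go then falls back to kubelet, dmesg, describe-nodes, CRI-O, and container-status collection. A small shell sketch of that per-component probe, mirroring the exact commands and messages in the log (assumes crictl is on PATH inside the node):

		# sketch of the cri.go container probe shown above
		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
		         kube-controller-manager kindnet kubernetes-dashboard; do
		  ids=$(sudo crictl ps -a --quiet --name="$c")
		  [ -z "$ids" ] && echo "No container was found matching \"$c\""
		done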
	I0229 02:32:49.992032  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.490801  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:51.586053  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:53.586268  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.510054  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:54.510975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.947926  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:52.970209  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:52.970317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:53.010840  370051 cri.go:89] found id: ""
	I0229 02:32:53.010868  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.010878  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:53.010886  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:53.010983  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:53.049458  370051 cri.go:89] found id: ""
	I0229 02:32:53.049490  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.049503  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:53.049511  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:53.049578  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:53.088615  370051 cri.go:89] found id: ""
	I0229 02:32:53.088646  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.088656  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:53.088671  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:53.088738  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:53.130176  370051 cri.go:89] found id: ""
	I0229 02:32:53.130210  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.130237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:53.130247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:53.130317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:53.177876  370051 cri.go:89] found id: ""
	I0229 02:32:53.177908  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.177920  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:53.177928  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:53.177991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:53.216036  370051 cri.go:89] found id: ""
	I0229 02:32:53.216065  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.216074  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:53.216080  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:53.216143  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:53.254673  370051 cri.go:89] found id: ""
	I0229 02:32:53.254705  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.254716  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:53.254724  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:53.254785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:53.291508  370051 cri.go:89] found id: ""
	I0229 02:32:53.291539  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.291551  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:53.291564  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:53.291581  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:53.343312  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:53.343354  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:53.359264  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:53.359294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:53.431396  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:53.431428  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:53.431445  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:53.512494  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:53.512529  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:56.057340  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:56.073074  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:56.073158  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:56.111650  370051 cri.go:89] found id: ""
	I0229 02:32:56.111684  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.111704  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:56.111713  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:56.111785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:54.990490  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.991005  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:55.587290  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:58.086312  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:57.008288  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:59.011396  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.150147  370051 cri.go:89] found id: ""
	I0229 02:32:56.150178  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.150191  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:56.150200  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:56.150280  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:56.192842  370051 cri.go:89] found id: ""
	I0229 02:32:56.192878  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.192890  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:56.192898  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:56.192969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:56.232013  370051 cri.go:89] found id: ""
	I0229 02:32:56.232051  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.232062  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:56.232079  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:56.232151  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:56.273824  370051 cri.go:89] found id: ""
	I0229 02:32:56.273858  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.273871  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:56.273882  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:56.273949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:56.312112  370051 cri.go:89] found id: ""
	I0229 02:32:56.312139  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.312147  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:56.312153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:56.312203  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:56.352558  370051 cri.go:89] found id: ""
	I0229 02:32:56.352585  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.352593  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:56.352600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:56.352666  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:56.397719  370051 cri.go:89] found id: ""
	I0229 02:32:56.397762  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.397775  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:56.397790  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:56.397808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:56.447793  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:56.447831  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:56.463859  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:56.463894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:56.540306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:56.540333  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:56.540347  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:56.633201  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:56.633247  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.207459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:59.222165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:59.222271  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:59.261197  370051 cri.go:89] found id: ""
	I0229 02:32:59.261230  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.261242  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:59.261251  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:59.261338  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:59.300874  370051 cri.go:89] found id: ""
	I0229 02:32:59.300917  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.300940  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:59.300950  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:59.301025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:59.345399  370051 cri.go:89] found id: ""
	I0229 02:32:59.345435  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.345446  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:59.345455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:59.345525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:59.386068  370051 cri.go:89] found id: ""
	I0229 02:32:59.386102  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.386112  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:59.386132  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:59.386184  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:59.436597  370051 cri.go:89] found id: ""
	I0229 02:32:59.436629  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.436641  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:59.436649  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:59.436708  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:59.481417  370051 cri.go:89] found id: ""
	I0229 02:32:59.481446  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.481462  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:59.481469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:59.481535  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:59.527725  370051 cri.go:89] found id: ""
	I0229 02:32:59.527752  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.527763  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:59.527771  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:59.527845  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:59.574502  370051 cri.go:89] found id: ""
	I0229 02:32:59.574535  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.574547  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:59.574561  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:59.574579  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:59.669584  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:59.669630  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.730049  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:59.730096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:59.779562  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:59.779613  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:59.797016  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:59.797046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:59.876438  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:58.991584  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.489321  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:03.489615  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:00.585463  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:02.587986  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.588479  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.509980  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.009579  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
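	Interleaved with the 370051 loop, the pod_ready.go:102 lines show three other profiles (PIDs 369591, 369508, 369869) polling metrics-server pods whose Ready condition stays False. A hedged kubectl equivalent of one such check, using a pod name from the log and a placeholder context name:

		# hedged equivalent of the pod_ready.go readiness check; <profile> is hypothetical
		kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-zghwq \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
		# prints "False" while the pod is unready, matching the log lines above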
	I0229 02:33:02.377144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:02.391585  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:02.391682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:02.432359  370051 cri.go:89] found id: ""
	I0229 02:33:02.432390  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.432399  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:02.432406  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:02.432462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:02.476733  370051 cri.go:89] found id: ""
	I0229 02:33:02.476768  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.476781  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:02.476790  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:02.476856  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:02.521414  370051 cri.go:89] found id: ""
	I0229 02:33:02.521440  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.521448  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:02.521454  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:02.521513  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:02.561663  370051 cri.go:89] found id: ""
	I0229 02:33:02.561690  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.561698  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:02.561704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:02.561755  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:02.611953  370051 cri.go:89] found id: ""
	I0229 02:33:02.611989  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.612002  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:02.612010  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:02.612079  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:02.663254  370051 cri.go:89] found id: ""
	I0229 02:33:02.663282  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.663290  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:02.663297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:02.663348  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:02.721449  370051 cri.go:89] found id: ""
	I0229 02:33:02.721484  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.721497  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:02.721506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:02.721579  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:02.761197  370051 cri.go:89] found id: ""
	I0229 02:33:02.761239  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.761251  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:02.761265  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:02.761282  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:02.810457  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:02.810498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:02.828906  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:02.828940  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:02.911895  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:02.911932  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:02.911945  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:02.995120  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:02.995152  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:05.544629  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:05.559266  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:05.559342  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:05.609673  370051 cri.go:89] found id: ""
	I0229 02:33:05.609706  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.609718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:05.609727  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:05.609795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:05.665161  370051 cri.go:89] found id: ""
	I0229 02:33:05.665192  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.665203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:05.665211  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:05.665282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:05.719923  370051 cri.go:89] found id: ""
	I0229 02:33:05.719949  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.719957  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:05.719963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:05.720025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:05.765189  370051 cri.go:89] found id: ""
	I0229 02:33:05.765224  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.765237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:05.765245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:05.765357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:05.803788  370051 cri.go:89] found id: ""
	I0229 02:33:05.803820  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.803829  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:05.803836  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:05.803909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:05.842152  370051 cri.go:89] found id: ""
	I0229 02:33:05.842178  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.842188  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:05.842197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:05.842278  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:05.885042  370051 cri.go:89] found id: ""
	I0229 02:33:05.885071  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.885084  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:05.885092  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:05.885156  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:05.926032  370051 cri.go:89] found id: ""
	I0229 02:33:05.926069  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.926082  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:05.926096  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:05.926112  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:06.014702  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:06.014744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:06.063510  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:06.063550  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:06.114215  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:06.114272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:06.130132  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:06.130169  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:05.490726  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.491068  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.085225  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:09.087524  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:06.508469  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:08.509399  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:06.205692  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:08.706549  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:08.722548  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:08.722614  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:08.768518  370051 cri.go:89] found id: ""
	I0229 02:33:08.768553  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.768564  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:08.768572  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:08.768630  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:08.804600  370051 cri.go:89] found id: ""
	I0229 02:33:08.804630  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.804643  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:08.804651  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:08.804721  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:08.842466  370051 cri.go:89] found id: ""
	I0229 02:33:08.842497  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.842510  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:08.842518  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:08.842589  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:08.878384  370051 cri.go:89] found id: ""
	I0229 02:33:08.878412  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.878421  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:08.878427  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:08.878484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:08.924228  370051 cri.go:89] found id: ""
	I0229 02:33:08.924262  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.924275  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:08.924295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:08.924374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:08.966122  370051 cri.go:89] found id: ""
	I0229 02:33:08.966157  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.966168  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:08.966177  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:08.966254  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:09.011109  370051 cri.go:89] found id: ""
	I0229 02:33:09.011135  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.011144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:09.011152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:09.011217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:09.059716  370051 cri.go:89] found id: ""
	I0229 02:33:09.059749  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.059782  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:09.059795  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:09.059812  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:09.110564  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:09.110599  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:09.126037  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:09.126065  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:09.199827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:09.199858  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:09.199892  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:09.282624  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:09.282661  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:09.990502  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.991783  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.586475  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:13.586740  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:10.511051  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:12.512644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:15.009478  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.829017  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:11.842826  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:11.842894  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:11.881652  370051 cri.go:89] found id: ""
	I0229 02:33:11.881689  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.881700  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:11.881709  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:11.881773  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:11.919252  370051 cri.go:89] found id: ""
	I0229 02:33:11.919291  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.919302  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:11.919309  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:11.919380  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:11.959145  370051 cri.go:89] found id: ""
	I0229 02:33:11.959175  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.959187  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:11.959196  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:11.959263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:12.002105  370051 cri.go:89] found id: ""
	I0229 02:33:12.002134  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.002145  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:12.002153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:12.002219  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:12.042157  370051 cri.go:89] found id: ""
	I0229 02:33:12.042188  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.042221  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:12.042249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:12.042326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:12.080121  370051 cri.go:89] found id: ""
	I0229 02:33:12.080150  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.080158  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:12.080165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:12.080231  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:12.119259  370051 cri.go:89] found id: ""
	I0229 02:33:12.119286  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.119294  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:12.119301  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:12.119357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:12.160136  370051 cri.go:89] found id: ""
	I0229 02:33:12.160171  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.160182  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:12.160195  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:12.160209  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:12.209770  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:12.209810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:12.226429  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:12.226460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:12.295933  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:12.295966  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:12.295978  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:12.380794  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:12.380843  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:14.971692  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:14.986085  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:14.986162  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:15.024756  370051 cri.go:89] found id: ""
	I0229 02:33:15.024788  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.024801  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:15.024809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:15.024868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:15.065131  370051 cri.go:89] found id: ""
	I0229 02:33:15.065159  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.065172  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:15.065180  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:15.065251  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:15.104744  370051 cri.go:89] found id: ""
	I0229 02:33:15.104775  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.104786  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:15.104794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:15.104858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:15.145710  370051 cri.go:89] found id: ""
	I0229 02:33:15.145737  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.145745  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:15.145752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:15.145803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:15.184908  370051 cri.go:89] found id: ""
	I0229 02:33:15.184933  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.184942  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:15.184951  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:15.185016  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:15.230195  370051 cri.go:89] found id: ""
	I0229 02:33:15.230220  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.230241  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:15.230249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:15.230326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:15.269750  370051 cri.go:89] found id: ""
	I0229 02:33:15.269774  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.269783  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:15.269789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:15.269852  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:15.312331  370051 cri.go:89] found id: ""
	I0229 02:33:15.312360  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.312373  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:15.312387  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:15.312402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:15.363032  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:15.363067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:15.422421  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:15.422463  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:15.445235  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:15.445272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:15.530010  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:15.530047  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:15.530066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:14.489188  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.991028  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.090733  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.587045  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:17.510766  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:20.009379  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.116265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:18.130375  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:18.130439  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:18.167740  370051 cri.go:89] found id: ""
	I0229 02:33:18.167767  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.167776  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:18.167782  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:18.167843  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:18.205621  370051 cri.go:89] found id: ""
	I0229 02:33:18.205653  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.205662  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:18.205670  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:18.205725  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:18.246917  370051 cri.go:89] found id: ""
	I0229 02:33:18.246954  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.246975  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:18.246983  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:18.247040  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:18.285087  370051 cri.go:89] found id: ""
	I0229 02:33:18.285114  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.285123  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:18.285130  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:18.285181  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:18.323989  370051 cri.go:89] found id: ""
	I0229 02:33:18.324018  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.324027  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:18.324033  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:18.324094  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:18.372741  370051 cri.go:89] found id: ""
	I0229 02:33:18.372769  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.372779  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:18.372785  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:18.372838  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:18.432846  370051 cri.go:89] found id: ""
	I0229 02:33:18.432888  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.432900  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:18.432908  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:18.432977  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:18.486357  370051 cri.go:89] found id: ""
	I0229 02:33:18.486387  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.486399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:18.486411  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:18.486431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:18.532363  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:18.532402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:18.582035  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:18.582076  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:18.599009  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:18.599050  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:18.673580  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:18.673609  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:18.673625  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:19.490704  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.990251  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.085541  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:23.086148  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:22.009826  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:24.509388  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.259614  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:21.274150  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:21.274250  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:21.311859  370051 cri.go:89] found id: ""
	I0229 02:33:21.311895  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.311908  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:21.311917  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:21.311984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:21.364260  370051 cri.go:89] found id: ""
	I0229 02:33:21.364296  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.364309  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:21.364317  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:21.364391  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:21.424181  370051 cri.go:89] found id: ""
	I0229 02:33:21.424217  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.424229  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:21.424237  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:21.424306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:21.482499  370051 cri.go:89] found id: ""
	I0229 02:33:21.482531  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.482543  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:21.482551  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:21.482621  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:21.523743  370051 cri.go:89] found id: ""
	I0229 02:33:21.523775  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.523785  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:21.523793  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:21.523868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:21.563759  370051 cri.go:89] found id: ""
	I0229 02:33:21.563789  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.563800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:21.563809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:21.563889  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:21.610162  370051 cri.go:89] found id: ""
	I0229 02:33:21.610265  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.610286  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:21.610295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:21.610378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:21.652001  370051 cri.go:89] found id: ""
	I0229 02:33:21.652028  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.652037  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:21.652047  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:21.652060  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:21.704028  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:21.704067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:21.720924  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:21.720956  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:21.798619  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:21.798645  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:21.798664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:21.888445  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:21.888506  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.437647  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:24.459963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:24.460041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:24.503906  370051 cri.go:89] found id: ""
	I0229 02:33:24.503940  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.503950  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:24.503956  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:24.504031  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:24.541893  370051 cri.go:89] found id: ""
	I0229 02:33:24.541919  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.541929  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:24.541935  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:24.541991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:24.584717  370051 cri.go:89] found id: ""
	I0229 02:33:24.584748  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.584760  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:24.584769  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:24.584836  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:24.623334  370051 cri.go:89] found id: ""
	I0229 02:33:24.623362  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.623371  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:24.623378  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:24.623447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:24.665862  370051 cri.go:89] found id: ""
	I0229 02:33:24.665890  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.665902  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:24.665911  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:24.665984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:24.705509  370051 cri.go:89] found id: ""
	I0229 02:33:24.705540  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.705551  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:24.705560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:24.705634  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:24.745348  370051 cri.go:89] found id: ""
	I0229 02:33:24.745389  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.745399  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:24.745406  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:24.745462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:24.785490  370051 cri.go:89] found id: ""
	I0229 02:33:24.785520  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.785529  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:24.785539  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:24.785553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.829556  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:24.829589  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:24.877914  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:24.877949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:24.894590  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:24.894623  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:24.972948  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:24.972981  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:24.972997  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:23.990806  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.489823  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:25.586684  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.588321  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.509932  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:29.010692  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.555364  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:27.570747  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:27.570820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:27.609771  370051 cri.go:89] found id: ""
	I0229 02:33:27.609800  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.609807  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:27.609813  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:27.609863  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:27.654316  370051 cri.go:89] found id: ""
	I0229 02:33:27.654347  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.654360  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:27.654376  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:27.654453  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:27.695089  370051 cri.go:89] found id: ""
	I0229 02:33:27.695125  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.695137  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:27.695143  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:27.695199  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:27.733846  370051 cri.go:89] found id: ""
	I0229 02:33:27.733881  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.733893  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:27.733901  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:27.733972  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:27.772906  370051 cri.go:89] found id: ""
	I0229 02:33:27.772940  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.772953  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:27.772961  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:27.773039  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:27.812266  370051 cri.go:89] found id: ""
	I0229 02:33:27.812295  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.812308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:27.812316  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:27.812387  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:27.849272  370051 cri.go:89] found id: ""
	I0229 02:33:27.849305  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.849316  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:27.849324  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:27.849393  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:27.887495  370051 cri.go:89] found id: ""
	I0229 02:33:27.887528  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.887541  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:27.887554  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:27.887569  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:27.972220  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:27.972261  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:28.020757  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:28.020797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:28.070347  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:28.070381  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:28.089905  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:28.089947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:28.183306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:30.683857  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:30.701341  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:30.701443  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:30.741342  370051 cri.go:89] found id: ""
	I0229 02:33:30.741376  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.741387  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:30.741397  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:30.741475  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:30.785372  370051 cri.go:89] found id: ""
	I0229 02:33:30.785415  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.785427  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:30.785435  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:30.785506  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:30.828402  370051 cri.go:89] found id: ""
	I0229 02:33:30.828428  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.828436  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:30.828442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:30.828504  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:30.872656  370051 cri.go:89] found id: ""
	I0229 02:33:30.872684  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.872695  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:30.872704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:30.872770  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:30.918746  370051 cri.go:89] found id: ""
	I0229 02:33:30.918775  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.918786  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:30.918794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:30.918867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:30.956794  370051 cri.go:89] found id: ""
	I0229 02:33:30.956838  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.956852  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:30.956860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:30.956935  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:31.000595  370051 cri.go:89] found id: ""
	I0229 02:33:31.000618  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.000628  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:31.000637  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:31.000699  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:31.039060  370051 cri.go:89] found id: ""
	I0229 02:33:31.039089  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.039100  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:31.039111  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:31.039133  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:31.089919  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:31.089949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:31.110276  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:31.110315  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:28.990807  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.993882  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.489703  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.086658  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:32.586407  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:34.588272  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:31.509534  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.511710  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:31.235760  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:31.235791  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:31.235810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:31.323257  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:31.323322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:33.872956  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:33.887953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:33.888034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:33.927887  370051 cri.go:89] found id: ""
	I0229 02:33:33.927926  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.927938  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:33.927945  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:33.928001  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:33.967301  370051 cri.go:89] found id: ""
	I0229 02:33:33.967333  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.967345  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:33.967356  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:33.967425  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:34.009949  370051 cri.go:89] found id: ""
	I0229 02:33:34.009982  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.009992  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:34.009999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:34.010073  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:34.056197  370051 cri.go:89] found id: ""
	I0229 02:33:34.056224  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.056232  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:34.056239  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:34.056314  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:34.107089  370051 cri.go:89] found id: ""
	I0229 02:33:34.107120  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.107132  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:34.107140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:34.107206  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:34.162822  370051 cri.go:89] found id: ""
	I0229 02:33:34.162856  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.162875  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:34.162884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:34.162961  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:34.209963  370051 cri.go:89] found id: ""
	I0229 02:33:34.209993  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.210001  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:34.210008  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:34.210078  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:34.250688  370051 cri.go:89] found id: ""
	I0229 02:33:34.250726  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.250735  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:34.250754  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:34.250768  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:34.298953  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:34.298993  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:34.314067  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:34.314100  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:34.393515  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:34.393536  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:34.393551  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:34.477034  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:34.477078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:35.990175  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.490651  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.087261  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:39.588400  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:36.009933  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.508929  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.025152  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:37.040410  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:37.040491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:37.077922  370051 cri.go:89] found id: ""
	I0229 02:33:37.077953  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.077965  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:37.077973  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:37.078041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:37.137895  370051 cri.go:89] found id: ""
	I0229 02:33:37.137925  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.137938  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:37.137946  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:37.138012  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:37.199291  370051 cri.go:89] found id: ""
	I0229 02:33:37.199324  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.199336  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:37.199344  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:37.199422  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:37.242817  370051 cri.go:89] found id: ""
	I0229 02:33:37.242848  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.242857  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:37.242863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:37.242917  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:37.282171  370051 cri.go:89] found id: ""
	I0229 02:33:37.282196  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.282204  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:37.282211  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:37.282284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:37.328608  370051 cri.go:89] found id: ""
	I0229 02:33:37.328639  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.328647  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:37.328658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:37.328724  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:37.372965  370051 cri.go:89] found id: ""
	I0229 02:33:37.372996  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.373008  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:37.373016  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:37.373091  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:37.417597  370051 cri.go:89] found id: ""
	I0229 02:33:37.417630  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.417642  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:37.417655  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:37.417673  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:37.472023  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:37.472058  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:37.487931  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:37.487961  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:37.568196  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:37.568227  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:37.568245  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:37.658485  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:37.658523  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.203039  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:40.220385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:40.220477  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:40.262962  370051 cri.go:89] found id: ""
	I0229 02:33:40.262993  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.263004  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:40.263016  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:40.263086  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:40.302452  370051 cri.go:89] found id: ""
	I0229 02:33:40.302483  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.302495  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:40.302503  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:40.302560  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:40.342509  370051 cri.go:89] found id: ""
	I0229 02:33:40.342544  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.342557  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:40.342566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:40.342644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:40.385585  370051 cri.go:89] found id: ""
	I0229 02:33:40.385615  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.385629  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:40.385638  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:40.385703  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:40.426839  370051 cri.go:89] found id: ""
	I0229 02:33:40.426874  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.426887  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:40.426896  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:40.426962  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:40.467217  370051 cri.go:89] found id: ""
	I0229 02:33:40.467241  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.467251  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:40.467257  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:40.467332  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:40.513525  370051 cri.go:89] found id: ""
	I0229 02:33:40.513546  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.513553  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:40.513559  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:40.513609  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:40.554187  370051 cri.go:89] found id: ""
	I0229 02:33:40.554256  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.554269  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:40.554282  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:40.554301  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:40.636447  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:40.636477  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:40.636494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:40.716381  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:40.716423  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.761946  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:40.761982  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:40.812828  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:40.812862  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:40.492178  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.991517  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.086413  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:44.586663  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:40.510266  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.510702  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:45.013362  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:43.336139  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:43.352278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:43.352361  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:43.392555  370051 cri.go:89] found id: ""
	I0229 02:33:43.392593  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.392607  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:43.392616  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:43.392689  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:43.438169  370051 cri.go:89] found id: ""
	I0229 02:33:43.438202  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.438216  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:43.438242  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:43.438331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:43.476987  370051 cri.go:89] found id: ""
	I0229 02:33:43.477021  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.477033  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:43.477042  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:43.477109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:43.526728  370051 cri.go:89] found id: ""
	I0229 02:33:43.526758  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.526767  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:43.526778  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:43.526833  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:43.572222  370051 cri.go:89] found id: ""
	I0229 02:33:43.572260  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.572273  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:43.572282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:43.572372  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:43.618650  370051 cri.go:89] found id: ""
	I0229 02:33:43.618679  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.618691  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:43.618698  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:43.618764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:43.658069  370051 cri.go:89] found id: ""
	I0229 02:33:43.658104  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.658116  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:43.658126  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:43.658196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:43.700790  370051 cri.go:89] found id: ""
	I0229 02:33:43.700829  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.700841  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:43.700855  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:43.700874  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:43.753330  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:43.753372  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:43.770261  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:43.770294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:43.842407  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:43.842430  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:43.842447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:43.935427  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:43.935470  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:45.490296  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.490514  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.088903  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.585902  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.510105  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.511420  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
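
Interleaved with that probe loop, three other test processes (PIDs 369591, 369508, 369869) are polling metrics-server pods in the kube-system namespace; pod_ready.go keeps reporting the Ready condition as False. A sketch of that readiness check using client-go, assuming a placeholder kubeconfig path and borrowing one pod name from the log; this is an illustration of the check, not minikube's pod_ready implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, which is the
// condition the pod_ready.go lines above keep finding False.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an illustrative placeholder.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-57f55c9bc5-zghwq", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet; retrying")
		time.Sleep(2 * time.Second)
	}
}
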
	I0229 02:33:46.498694  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:46.516463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:46.516541  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:46.554731  370051 cri.go:89] found id: ""
	I0229 02:33:46.554757  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.554766  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:46.554772  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:46.554835  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:46.596851  370051 cri.go:89] found id: ""
	I0229 02:33:46.596892  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.596905  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:46.596912  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:46.596981  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:46.634978  370051 cri.go:89] found id: ""
	I0229 02:33:46.635008  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.635017  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:46.635024  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:46.635089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:46.675302  370051 cri.go:89] found id: ""
	I0229 02:33:46.675334  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.675347  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:46.675355  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:46.675423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:46.717366  370051 cri.go:89] found id: ""
	I0229 02:33:46.717402  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.717413  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:46.717421  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:46.717484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:46.756130  370051 cri.go:89] found id: ""
	I0229 02:33:46.756160  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.756169  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:46.756176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:46.756228  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:46.794283  370051 cri.go:89] found id: ""
	I0229 02:33:46.794312  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.794320  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:46.794328  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:46.794384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:46.836646  370051 cri.go:89] found id: ""
	I0229 02:33:46.836679  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.836691  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:46.836703  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:46.836721  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:46.926532  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:46.926578  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:46.981883  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:46.981915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:47.033571  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:47.033612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:47.049803  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:47.049833  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:47.123389  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
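
Every "describe nodes" attempt fails the same way: kubectl dials the apiserver at localhost:8443 inside the VM and gets connection refused, which is consistent with the empty kube-apiserver probes above. A quick reachability check one could run in the same situation (the address comes from the log; the rest is an illustrative sketch):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 matches the kubeconfig endpoint kubectl is refusing on above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
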
	I0229 02:33:49.623827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:49.638175  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:49.638263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:49.675895  370051 cri.go:89] found id: ""
	I0229 02:33:49.675929  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.675941  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:49.675950  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:49.676009  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:49.720679  370051 cri.go:89] found id: ""
	I0229 02:33:49.720718  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.720730  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:49.720739  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:49.720808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:49.762299  370051 cri.go:89] found id: ""
	I0229 02:33:49.762329  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.762342  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:49.762350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:49.762426  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:49.809330  370051 cri.go:89] found id: ""
	I0229 02:33:49.809364  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.809376  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:49.809391  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:49.809455  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:49.859176  370051 cri.go:89] found id: ""
	I0229 02:33:49.859206  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.859218  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:49.859226  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:49.859292  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:49.914844  370051 cri.go:89] found id: ""
	I0229 02:33:49.914877  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.914890  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:49.914897  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:49.914967  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:49.969640  370051 cri.go:89] found id: ""
	I0229 02:33:49.969667  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.969676  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:49.969682  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:49.969736  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:50.010924  370051 cri.go:89] found id: ""
	I0229 02:33:50.010953  370051 logs.go:276] 0 containers: []
	W0229 02:33:50.010965  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:50.010976  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:50.011002  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:50.089462  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:50.089494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:50.132098  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:50.132129  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:50.182693  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:50.182737  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:50.198209  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:50.198256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:50.281521  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:49.991831  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.489891  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.586298  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:53.587249  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.513176  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:54.010209  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.781677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:52.795962  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:52.796055  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:52.833670  370051 cri.go:89] found id: ""
	I0229 02:33:52.833706  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.833718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:52.833728  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:52.833795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:52.889497  370051 cri.go:89] found id: ""
	I0229 02:33:52.889529  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.889539  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:52.889547  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:52.889616  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:52.952880  370051 cri.go:89] found id: ""
	I0229 02:33:52.952915  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.952927  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:52.952935  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:52.953002  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:53.008380  370051 cri.go:89] found id: ""
	I0229 02:33:53.008409  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.008420  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:53.008434  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:53.008502  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:53.047877  370051 cri.go:89] found id: ""
	I0229 02:33:53.047911  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.047922  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:53.047931  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:53.047999  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:53.086080  370051 cri.go:89] found id: ""
	I0229 02:33:53.086107  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.086118  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:53.086127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:53.086193  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:53.128334  370051 cri.go:89] found id: ""
	I0229 02:33:53.128368  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.128378  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:53.128385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:53.128457  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:53.172201  370051 cri.go:89] found id: ""
	I0229 02:33:53.172232  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.172245  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:53.172258  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:53.172275  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:53.222608  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:53.222648  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:53.239888  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:53.239918  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:53.315827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:53.315850  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:53.315864  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:53.395457  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:53.395498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:55.943418  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:55.960562  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:55.960638  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:56.005181  370051 cri.go:89] found id: ""
	I0229 02:33:56.005210  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.005221  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:56.005229  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:56.005293  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:56.046700  370051 cri.go:89] found id: ""
	I0229 02:33:56.046731  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.046743  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:56.046750  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:56.046814  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:56.088459  370051 cri.go:89] found id: ""
	I0229 02:33:56.088486  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.088497  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:56.088505  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:56.088571  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:56.127729  370051 cri.go:89] found id: ""
	I0229 02:33:56.127762  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.127774  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:56.127783  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:56.127862  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:54.491536  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.493973  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.089188  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.586570  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.011539  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.509708  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.169980  370051 cri.go:89] found id: ""
	I0229 02:33:56.170011  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.170022  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:56.170030  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:56.170098  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:56.210650  370051 cri.go:89] found id: ""
	I0229 02:33:56.210682  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.210694  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:56.210704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:56.210771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:56.247342  370051 cri.go:89] found id: ""
	I0229 02:33:56.247380  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.247391  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:56.247400  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:56.247474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:56.286322  370051 cri.go:89] found id: ""
	I0229 02:33:56.286353  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.286364  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:56.286375  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:56.286393  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:56.335144  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:56.335184  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:56.351322  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:56.351359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:56.424251  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:56.424282  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:56.424299  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:56.506053  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:56.506082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
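
The "container status" step relies on a shell fallback: it resolves crictl via which, and if the crictl invocation fails entirely it falls back to docker ps -a. The same fallback expressed directly in Go (command names are taken from the log; the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback in the log: try crictl first,
// then fall back to docker if crictl is missing or fails.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(out)
}
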
	I0229 02:33:59.052805  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:59.067508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:59.067599  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:59.114213  370051 cri.go:89] found id: ""
	I0229 02:33:59.114256  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.114268  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:59.114276  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:59.114327  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:59.161087  370051 cri.go:89] found id: ""
	I0229 02:33:59.161123  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.161136  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:59.161145  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:59.161217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:59.206071  370051 cri.go:89] found id: ""
	I0229 02:33:59.206101  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.206114  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:59.206122  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:59.206196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:59.245152  370051 cri.go:89] found id: ""
	I0229 02:33:59.245179  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.245188  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:59.245194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:59.245247  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:59.286047  370051 cri.go:89] found id: ""
	I0229 02:33:59.286080  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.286092  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:59.286101  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:59.286165  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:59.323171  370051 cri.go:89] found id: ""
	I0229 02:33:59.323203  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.323214  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:59.323222  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:59.323288  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:59.364434  370051 cri.go:89] found id: ""
	I0229 02:33:59.364464  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.364477  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:59.364485  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:59.364554  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:59.405902  370051 cri.go:89] found id: ""
	I0229 02:33:59.405929  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.405938  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:59.405948  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:59.405980  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:59.481810  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:59.481841  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:59.481858  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:59.575726  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:59.575767  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.634808  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:59.634849  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:59.702513  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:59.702552  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:58.991152  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.490426  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:00.587747  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:02.594677  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.010009  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:03.509687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:02.219660  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:02.234037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:02.234105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:02.277956  370051 cri.go:89] found id: ""
	I0229 02:34:02.277982  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.277991  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:02.277998  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:02.278071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:02.322832  370051 cri.go:89] found id: ""
	I0229 02:34:02.322856  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.322869  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:02.322878  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:02.322949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:02.368612  370051 cri.go:89] found id: ""
	I0229 02:34:02.368646  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.368659  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:02.368668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:02.368731  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:02.412436  370051 cri.go:89] found id: ""
	I0229 02:34:02.412466  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.412479  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:02.412486  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:02.412544  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:02.448682  370051 cri.go:89] found id: ""
	I0229 02:34:02.448713  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.448724  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:02.448733  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:02.448803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:02.486676  370051 cri.go:89] found id: ""
	I0229 02:34:02.486705  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.486723  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:02.486730  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:02.486795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:02.531814  370051 cri.go:89] found id: ""
	I0229 02:34:02.531841  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.531852  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:02.531860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:02.531934  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:02.569800  370051 cri.go:89] found id: ""
	I0229 02:34:02.569835  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.569845  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:02.569857  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:02.569871  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:02.623903  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:02.623937  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:02.643856  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:02.643884  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:02.735520  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:02.735544  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:02.735563  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:02.816572  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:02.816612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.371459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:05.385179  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:05.385255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:05.424653  370051 cri.go:89] found id: ""
	I0229 02:34:05.424679  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.424687  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:05.424694  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:05.424752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:05.463726  370051 cri.go:89] found id: ""
	I0229 02:34:05.463754  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.463763  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:05.463769  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:05.463823  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:05.510367  370051 cri.go:89] found id: ""
	I0229 02:34:05.510396  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.510407  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:05.510415  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:05.510480  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:05.548421  370051 cri.go:89] found id: ""
	I0229 02:34:05.548445  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.548455  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:05.548461  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:05.548527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:05.588778  370051 cri.go:89] found id: ""
	I0229 02:34:05.588801  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.588809  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:05.588815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:05.588875  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:05.638449  370051 cri.go:89] found id: ""
	I0229 02:34:05.638479  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.638490  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:05.638506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:05.638567  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:05.709921  370051 cri.go:89] found id: ""
	I0229 02:34:05.709950  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.709964  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:05.709972  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:05.710038  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:05.756965  370051 cri.go:89] found id: ""
	I0229 02:34:05.756992  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.757000  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:05.757010  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:05.757025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:05.826878  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:05.826904  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:05.826921  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:05.909205  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:05.909256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.954537  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:05.954594  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:06.004157  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:06.004203  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:03.989381  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.990323  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.491379  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.086296  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:07.586477  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.511758  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.009545  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.010247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.522975  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:08.539247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:08.539326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:08.579776  370051 cri.go:89] found id: ""
	I0229 02:34:08.579806  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.579817  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:08.579826  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:08.579890  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:08.628415  370051 cri.go:89] found id: ""
	I0229 02:34:08.628444  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.628456  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:08.628468  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:08.628534  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:08.690499  370051 cri.go:89] found id: ""
	I0229 02:34:08.690530  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.690540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:08.690547  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:08.690613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:08.739755  370051 cri.go:89] found id: ""
	I0229 02:34:08.739788  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.739801  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:08.739809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:08.739906  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:08.781693  370051 cri.go:89] found id: ""
	I0229 02:34:08.781721  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.781733  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:08.781742  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:08.781808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:08.818605  370051 cri.go:89] found id: ""
	I0229 02:34:08.818637  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.818645  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:08.818652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:08.818713  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:08.861533  370051 cri.go:89] found id: ""
	I0229 02:34:08.861559  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.861569  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:08.861578  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:08.861658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:08.902727  370051 cri.go:89] found id: ""
	I0229 02:34:08.902758  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.902771  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:08.902784  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:08.902801  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:08.948527  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:08.948567  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:08.999883  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:08.999916  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:09.015438  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:09.015467  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:09.087965  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:09.087994  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:09.088010  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:10.990135  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.991074  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.085517  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.086653  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:14.086817  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.510247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:15.010412  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:11.671443  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:11.702197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:11.702322  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:11.755104  370051 cri.go:89] found id: ""
	I0229 02:34:11.755136  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.755147  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:11.755153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:11.755204  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:11.794190  370051 cri.go:89] found id: ""
	I0229 02:34:11.794218  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.794239  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:11.794247  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:11.794310  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:11.837330  370051 cri.go:89] found id: ""
	I0229 02:34:11.837360  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.837372  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:11.837380  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:11.837447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:11.876682  370051 cri.go:89] found id: ""
	I0229 02:34:11.876716  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.876726  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:11.876734  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:11.876805  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:11.922172  370051 cri.go:89] found id: ""
	I0229 02:34:11.922239  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.922262  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:11.922271  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:11.922341  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:11.962218  370051 cri.go:89] found id: ""
	I0229 02:34:11.962270  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.962283  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:11.962291  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:11.962375  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:12.002075  370051 cri.go:89] found id: ""
	I0229 02:34:12.002101  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.002110  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:12.002117  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:12.002169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:12.043337  370051 cri.go:89] found id: ""
	I0229 02:34:12.043378  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.043399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:12.043412  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:12.043428  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:12.094458  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:12.094491  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:12.112374  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:12.112401  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:12.193665  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:12.193689  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:12.193717  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:12.282510  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:12.282553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:14.828451  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:14.843626  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:14.843690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:14.884181  370051 cri.go:89] found id: ""
	I0229 02:34:14.884214  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.884226  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:14.884235  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:14.884302  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:14.926312  370051 cri.go:89] found id: ""
	I0229 02:34:14.926347  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.926361  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:14.926369  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:14.926436  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:14.969147  370051 cri.go:89] found id: ""
	I0229 02:34:14.969182  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.969195  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:14.969207  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:14.969277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:15.013000  370051 cri.go:89] found id: ""
	I0229 02:34:15.013045  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.013055  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:15.013064  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:15.013120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:15.055811  370051 cri.go:89] found id: ""
	I0229 02:34:15.055849  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.055861  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:15.055869  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:15.055939  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:15.100736  370051 cri.go:89] found id: ""
	I0229 02:34:15.100768  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.100780  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:15.100789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:15.100867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:15.140115  370051 cri.go:89] found id: ""
	I0229 02:34:15.140151  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.140164  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:15.140172  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:15.140239  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:15.183545  370051 cri.go:89] found id: ""
	I0229 02:34:15.183576  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.183588  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:15.183602  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:15.183621  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:15.258646  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:15.258676  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:15.258693  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:15.347035  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:15.347082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:15.407148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:15.407178  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:15.466695  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:15.466741  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:15.490797  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.990851  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:16.585993  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:18.587604  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.509114  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:19.509856  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.989102  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:18.005052  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:18.005126  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:18.044687  370051 cri.go:89] found id: ""
	I0229 02:34:18.044714  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.044725  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:18.044739  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:18.044815  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:18.085904  370051 cri.go:89] found id: ""
	I0229 02:34:18.085934  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.085944  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:18.085952  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:18.086017  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:18.129958  370051 cri.go:89] found id: ""
	I0229 02:34:18.129985  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.129994  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:18.129999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:18.130052  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:18.166942  370051 cri.go:89] found id: ""
	I0229 02:34:18.166979  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.166991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:18.167000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:18.167056  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:18.205297  370051 cri.go:89] found id: ""
	I0229 02:34:18.205324  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.205331  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:18.205337  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:18.205410  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:18.246415  370051 cri.go:89] found id: ""
	I0229 02:34:18.246448  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.246461  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:18.246469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:18.246527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:18.285534  370051 cri.go:89] found id: ""
	I0229 02:34:18.285573  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.285585  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:18.285600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:18.285662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:18.327624  370051 cri.go:89] found id: ""
	I0229 02:34:18.327651  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.327659  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:18.327670  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:18.327684  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:18.383307  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:18.383351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:18.408127  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:18.408162  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:18.502036  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:18.502070  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:18.502093  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:18.582289  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:18.582340  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:20.490582  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:22.990210  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.086446  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:23.586600  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.511411  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:24.009976  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.135649  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:21.149411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:21.149498  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:21.198246  370051 cri.go:89] found id: ""
	I0229 02:34:21.198286  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.198298  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:21.198306  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:21.198378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:21.240168  370051 cri.go:89] found id: ""
	I0229 02:34:21.240195  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.240203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:21.240209  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:21.240275  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:21.281243  370051 cri.go:89] found id: ""
	I0229 02:34:21.281277  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.281288  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:21.281296  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:21.281359  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:21.321573  370051 cri.go:89] found id: ""
	I0229 02:34:21.321609  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.321621  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:21.321629  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:21.321693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:21.375156  370051 cri.go:89] found id: ""
	I0229 02:34:21.375212  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.375226  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:21.375234  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:21.375308  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:21.430450  370051 cri.go:89] found id: ""
	I0229 02:34:21.430487  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.430499  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:21.430508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:21.430576  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:21.475095  370051 cri.go:89] found id: ""
	I0229 02:34:21.475124  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.475135  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:21.475144  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:21.475215  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:21.517378  370051 cri.go:89] found id: ""
	I0229 02:34:21.517403  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.517412  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:21.517424  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:21.517444  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:21.534103  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:21.534147  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:21.608375  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:21.608400  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:21.608412  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:21.691912  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:21.691950  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:21.744366  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:21.744406  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.295384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:24.309456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:24.309539  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:24.370125  370051 cri.go:89] found id: ""
	I0229 02:34:24.370156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.370167  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:24.370175  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:24.370256  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:24.439458  370051 cri.go:89] found id: ""
	I0229 02:34:24.439487  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.439499  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:24.439506  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:24.439639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:24.478070  370051 cri.go:89] found id: ""
	I0229 02:34:24.478105  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.478119  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:24.478127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:24.478194  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:24.517128  370051 cri.go:89] found id: ""
	I0229 02:34:24.517156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.517168  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:24.517176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:24.517243  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:24.555502  370051 cri.go:89] found id: ""
	I0229 02:34:24.555537  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.555549  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:24.555557  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:24.555625  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:24.601261  370051 cri.go:89] found id: ""
	I0229 02:34:24.601295  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.601307  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:24.601315  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:24.601389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:24.639110  370051 cri.go:89] found id: ""
	I0229 02:34:24.639141  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.639153  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:24.639161  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:24.639224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:24.681448  370051 cri.go:89] found id: ""
	I0229 02:34:24.681478  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.681487  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:24.681498  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:24.681517  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.730735  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:24.730775  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:24.746996  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:24.747031  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:24.827581  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:24.827608  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:24.827628  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:24.909551  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:24.909596  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
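The cycle above repeats for the rest of this failure: the harness probes CRI-O for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and falls back to gathering host-level logs (kubelet, dmesg, CRI-O, and a "kubectl describe nodes" that fails because nothing answers on localhost:8443). A minimal Go sketch of that probe-then-gather loop, for reproducing the diagnostics by hand on the node; the component names and journal units are taken from the log above, everything else is illustrative rather than minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Components the harness checks, in the same order as the log.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        missing := 0
        for _, name := range components {
            // Same query the log shows: list container IDs by name, any state.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil || strings.TrimSpace(string(out)) == "" {
                fmt.Printf("no container found matching %q\n", name)
                missing++
                continue
            }
            fmt.Printf("%s: %s\n", name, strings.TrimSpace(string(out)))
        }
        if missing == len(components) {
            // Nothing is running, so fall back to host logs as the harness does.
            for _, unit := range []string{"kubelet", "crio"} {
                out, _ := exec.Command("sudo", "journalctl",
                    "-u", unit, "-n", "400").Output()
                fmt.Printf("--- journalctl -u %s ---\n%s", unit, out)
            }
        }
    }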
	I0229 02:34:24.990581  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.489787  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:25.586672  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.586999  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:26.509819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:29.009014  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.455967  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:27.477411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:27.477487  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:27.523163  370051 cri.go:89] found id: ""
	I0229 02:34:27.523189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.523198  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:27.523203  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:27.523258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:27.562298  370051 cri.go:89] found id: ""
	I0229 02:34:27.562330  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.562343  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:27.562350  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:27.562420  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:27.603506  370051 cri.go:89] found id: ""
	I0229 02:34:27.603532  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.603540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:27.603554  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:27.603619  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:27.646971  370051 cri.go:89] found id: ""
	I0229 02:34:27.647002  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.647014  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:27.647031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:27.647109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:27.685124  370051 cri.go:89] found id: ""
	I0229 02:34:27.685149  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.685160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:27.685169  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:27.685235  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:27.726976  370051 cri.go:89] found id: ""
	I0229 02:34:27.727007  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.727018  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:27.727026  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:27.727089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:27.767159  370051 cri.go:89] found id: ""
	I0229 02:34:27.767189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.767197  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:27.767204  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:27.767272  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:27.810377  370051 cri.go:89] found id: ""
	I0229 02:34:27.810411  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.810420  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:27.810431  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:27.810447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:27.858094  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:27.858136  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:27.874407  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:27.874440  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:27.953065  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:27.953092  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:27.953108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:28.042244  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:28.042278  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:30.588227  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:30.604954  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:30.605037  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:30.642069  370051 cri.go:89] found id: ""
	I0229 02:34:30.642100  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.642108  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:30.642119  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:30.642187  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:30.686212  370051 cri.go:89] found id: ""
	I0229 02:34:30.686264  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.686277  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:30.686285  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:30.686364  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:30.726668  370051 cri.go:89] found id: ""
	I0229 02:34:30.726702  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.726715  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:30.726723  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:30.726788  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:30.766850  370051 cri.go:89] found id: ""
	I0229 02:34:30.766883  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.766895  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:30.766904  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:30.766979  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:30.808972  370051 cri.go:89] found id: ""
	I0229 02:34:30.809002  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.809015  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:30.809023  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:30.809093  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:30.851992  370051 cri.go:89] found id: ""
	I0229 02:34:30.852016  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.852025  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:30.852031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:30.852096  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:30.891100  370051 cri.go:89] found id: ""
	I0229 02:34:30.891132  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.891144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:30.891157  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:30.891227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:30.931740  370051 cri.go:89] found id: ""
	I0229 02:34:30.931768  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.931777  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:30.931787  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:30.931808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:31.010896  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:31.010919  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:31.010936  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:31.094626  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:31.094662  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:29.490211  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.490659  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:30.086898  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:32.587485  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.010003  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:33.510267  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.150765  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:31.150804  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:31.202932  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:31.202976  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:33.723355  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:33.738651  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:33.738753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:33.778255  370051 cri.go:89] found id: ""
	I0229 02:34:33.778287  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.778299  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:33.778307  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:33.778384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:33.818360  370051 cri.go:89] found id: ""
	I0229 02:34:33.818396  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.818406  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:33.818412  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:33.818564  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:33.866781  370051 cri.go:89] found id: ""
	I0229 02:34:33.866814  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.866824  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:33.866831  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:33.866891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:33.910013  370051 cri.go:89] found id: ""
	I0229 02:34:33.910051  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.910063  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:33.910072  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:33.910146  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:33.956068  370051 cri.go:89] found id: ""
	I0229 02:34:33.956098  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.956106  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:33.956113  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:33.956170  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:34.004997  370051 cri.go:89] found id: ""
	I0229 02:34:34.005027  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.005038  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:34.005047  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:34.005113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:34.059266  370051 cri.go:89] found id: ""
	I0229 02:34:34.059293  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.059302  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:34.059307  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:34.059363  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:34.105601  370051 cri.go:89] found id: ""
	I0229 02:34:34.105631  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.105643  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:34.105654  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:34.105669  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:34.208723  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:34.208764  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:34.262105  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:34.262137  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:34.314528  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:34.314571  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:34.332441  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:34.332477  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:34.406303  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
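Every "describe nodes" attempt in this run ends with the same connection-refused error, which indicates that no process is listening on the apiserver's secure port at all, rather than the apiserver rejecting the request. A quick way to confirm that from the node, sketched in Go under the assumption that the host and port are the ones shown in the log (localhost:8443):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // localhost:8443 is the apiserver endpoint kubectl uses above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Matches the log: a refused connection means the port is closed,
            // i.e. kube-apiserver never came up, consistent with crictl
            // finding no kube-apiserver container.
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8443")
    }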
	I0229 02:34:33.990257  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.490844  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:35.085482  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:37.086532  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:39.087022  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.015574  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:38.510064  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.906814  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:36.922297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:36.922377  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:36.967550  370051 cri.go:89] found id: ""
	I0229 02:34:36.967578  370051 logs.go:276] 0 containers: []
	W0229 02:34:36.967589  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:36.967599  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:36.967662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:37.007589  370051 cri.go:89] found id: ""
	I0229 02:34:37.007614  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.007624  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:37.007632  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:37.007706  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:37.048230  370051 cri.go:89] found id: ""
	I0229 02:34:37.048260  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.048273  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:37.048281  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:37.048354  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:37.089329  370051 cri.go:89] found id: ""
	I0229 02:34:37.089355  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.089365  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:37.089373  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:37.089441  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:37.144654  370051 cri.go:89] found id: ""
	I0229 02:34:37.144687  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.144699  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:37.144708  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:37.144778  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:37.203822  370051 cri.go:89] found id: ""
	I0229 02:34:37.203857  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.203868  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:37.203876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:37.203948  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:37.250369  370051 cri.go:89] found id: ""
	I0229 02:34:37.250398  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.250410  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:37.250417  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:37.250490  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:37.290924  370051 cri.go:89] found id: ""
	I0229 02:34:37.290957  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.290969  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:37.290981  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:37.290995  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:37.343878  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:37.343920  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:37.359307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:37.359336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:37.435264  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:37.435292  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:37.435309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:37.518274  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:37.518309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:40.062232  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:40.079883  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:40.079957  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:40.123826  370051 cri.go:89] found id: ""
	I0229 02:34:40.123856  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.123866  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:40.123874  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:40.123943  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:40.190273  370051 cri.go:89] found id: ""
	I0229 02:34:40.190321  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.190332  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:40.190338  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:40.190395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:40.232921  370051 cri.go:89] found id: ""
	I0229 02:34:40.232949  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.232961  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:40.232968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:40.233034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:40.273490  370051 cri.go:89] found id: ""
	I0229 02:34:40.273517  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.273526  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:40.273538  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:40.273594  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:40.317121  370051 cri.go:89] found id: ""
	I0229 02:34:40.317152  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.317163  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:40.317171  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:40.317230  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:40.363347  370051 cri.go:89] found id: ""
	I0229 02:34:40.363380  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.363389  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:40.363396  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:40.363459  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:40.407187  370051 cri.go:89] found id: ""
	I0229 02:34:40.407213  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.407222  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:40.407231  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:40.407282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:40.447185  370051 cri.go:89] found id: ""
	I0229 02:34:40.447218  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.447229  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:40.447242  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:40.447258  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:40.496998  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:40.497029  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:40.512520  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:40.512549  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:40.589150  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:40.589173  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:40.589190  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:40.677054  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:40.677096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:38.991307  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:40.992688  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.490195  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.585962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.586942  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.009837  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.510138  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.222265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:43.236567  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:43.236629  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:43.282917  370051 cri.go:89] found id: ""
	I0229 02:34:43.282959  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.282976  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:43.282982  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:43.283049  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:43.329273  370051 cri.go:89] found id: ""
	I0229 02:34:43.329302  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.329313  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:43.329321  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:43.329386  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:43.366696  370051 cri.go:89] found id: ""
	I0229 02:34:43.366723  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.366732  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:43.366739  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:43.366800  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:43.405793  370051 cri.go:89] found id: ""
	I0229 02:34:43.405820  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.405828  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:43.405834  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:43.405888  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:43.442870  370051 cri.go:89] found id: ""
	I0229 02:34:43.442898  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.442906  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:43.442912  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:43.442964  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:43.484581  370051 cri.go:89] found id: ""
	I0229 02:34:43.484615  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.484626  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:43.484635  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:43.484702  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:43.530931  370051 cri.go:89] found id: ""
	I0229 02:34:43.530954  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.530963  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:43.530968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:43.531024  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:43.572810  370051 cri.go:89] found id: ""
	I0229 02:34:43.572838  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.572850  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:43.572867  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:43.572883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:43.622815  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:43.622854  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:43.637972  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:43.638012  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:43.713704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:43.713728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:43.713746  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:43.797178  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:43.797220  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:45.490670  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:47.989828  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:45.587464  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.090384  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.009454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.010403  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
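The interleaved pod_ready lines come from three other test runs (process IDs 369591, 369508, 369869), each polling a metrics-server pod every couple of seconds and finding its Ready condition still False. A sketch of an equivalent readiness poll via kubectl's JSONPath output; the namespace and pod name are copied from the log, and the helper itself is illustrative, not the harness's client-go implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(namespace, pod string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        for {
            ok, err := podReady("kube-system", "metrics-server-57f55c9bc5-zghwq")
            if err == nil && ok {
                fmt.Println("pod is Ready")
                return
            }
            fmt.Println(`pod has status "Ready":"False"`)
            time.Sleep(2 * time.Second) // the log shows ~2.5s between polls
        }
    }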
	I0229 02:34:46.347159  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:46.361601  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:46.361682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:46.399751  370051 cri.go:89] found id: ""
	I0229 02:34:46.399784  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.399795  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:46.399804  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:46.399870  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:46.445367  370051 cri.go:89] found id: ""
	I0229 02:34:46.445398  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.445407  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:46.445413  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:46.445486  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:46.490323  370051 cri.go:89] found id: ""
	I0229 02:34:46.490363  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.490385  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:46.490393  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:46.490473  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:46.531406  370051 cri.go:89] found id: ""
	I0229 02:34:46.531441  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.531450  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:46.531456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:46.531507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:46.572759  370051 cri.go:89] found id: ""
	I0229 02:34:46.572787  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.572795  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:46.572804  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:46.572908  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:46.613055  370051 cri.go:89] found id: ""
	I0229 02:34:46.613083  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.613093  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:46.613099  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:46.613153  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:46.657504  370051 cri.go:89] found id: ""
	I0229 02:34:46.657536  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.657544  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:46.657550  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:46.657605  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:46.698008  370051 cri.go:89] found id: ""
	I0229 02:34:46.698057  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.698068  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:46.698080  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:46.698097  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:46.746648  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:46.746682  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:46.761190  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:46.761219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:46.843379  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:46.843403  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:46.843415  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:46.933493  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:46.933546  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
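With no containers to inspect, the tool falls back to host-level log gathering, and `kubectl describe nodes` fails because nothing is serving on localhost:8443. The gathering commands below are verbatim from the log; the final reachability check is an illustrative assumption, not part of the test output:

    sudo journalctl -u kubelet -n 400     # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400        # CRI-O logs
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # Assumption (not from the log): probe the apiserver port directly.
    curl -ks https://localhost:8443/healthz || echo "apiserver unreachable"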
	I0229 02:34:49.491837  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:49.508647  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:49.508717  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:49.550752  370051 cri.go:89] found id: ""
	I0229 02:34:49.550788  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.550800  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:49.550809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:49.550883  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:49.597623  370051 cri.go:89] found id: ""
	I0229 02:34:49.597663  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.597675  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:49.597683  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:49.597764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:49.635207  370051 cri.go:89] found id: ""
	I0229 02:34:49.635230  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.635238  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:49.635282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:49.635336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:49.674664  370051 cri.go:89] found id: ""
	I0229 02:34:49.674696  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.674708  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:49.674716  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:49.674777  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:49.715391  370051 cri.go:89] found id: ""
	I0229 02:34:49.715420  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.715433  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:49.715442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:49.715497  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:49.753318  370051 cri.go:89] found id: ""
	I0229 02:34:49.753352  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.753373  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:49.753382  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:49.753451  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:49.791342  370051 cri.go:89] found id: ""
	I0229 02:34:49.791369  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.791377  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:49.791384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:49.791456  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:49.838148  370051 cri.go:89] found id: ""
	I0229 02:34:49.838181  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.838191  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:49.838204  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:49.838244  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:49.891532  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:49.891568  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:49.917625  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:49.917664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:50.019436  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:50.019457  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:50.019472  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:50.108302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:50.108349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.991272  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.491139  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.586940  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.509504  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:53.010818  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
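Interleaved with those diagnostics, three other test processes (369591, 369508, 369869) are polling metrics-server pods that never become Ready. A hypothetical way to reproduce that wait by hand; the `k8s-app=metrics-server` label is an assumption, since the log shows only the generated pod names:

    kubectl -n kube-system wait pod -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=4m \
      || echo "metrics-server still not Ready"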
	I0229 02:34:52.654561  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:52.668331  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:52.668402  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:52.718431  370051 cri.go:89] found id: ""
	I0229 02:34:52.718471  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.718484  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:52.718493  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:52.718551  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:52.757913  370051 cri.go:89] found id: ""
	I0229 02:34:52.757946  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.757957  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:52.757965  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:52.758035  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:52.796792  370051 cri.go:89] found id: ""
	I0229 02:34:52.796821  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.796833  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:52.796842  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:52.796913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:52.832157  370051 cri.go:89] found id: ""
	I0229 02:34:52.832187  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.832196  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:52.832203  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:52.832264  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:52.879170  370051 cri.go:89] found id: ""
	I0229 02:34:52.879197  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.879206  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:52.879212  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:52.879265  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:52.924219  370051 cri.go:89] found id: ""
	I0229 02:34:52.924249  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.924258  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:52.924264  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:52.924318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:52.980422  370051 cri.go:89] found id: ""
	I0229 02:34:52.980450  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.980457  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:52.980463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:52.980525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:53.026393  370051 cri.go:89] found id: ""
	I0229 02:34:53.026418  370051 logs.go:276] 0 containers: []
	W0229 02:34:53.026426  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:53.026436  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:53.026453  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:53.075135  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:53.075174  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:53.092197  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:53.092223  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:53.164397  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:53.164423  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:53.164439  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:53.250310  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:53.250366  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:55.792993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:55.807152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:55.807229  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:55.867791  370051 cri.go:89] found id: ""
	I0229 02:34:55.867821  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.867830  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:55.867847  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:55.867925  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:55.922960  370051 cri.go:89] found id: ""
	I0229 02:34:55.922989  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.923001  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:55.923009  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:55.923076  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:55.972510  370051 cri.go:89] found id: ""
	I0229 02:34:55.972541  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.972552  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:55.972560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:55.972632  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:56.011948  370051 cri.go:89] found id: ""
	I0229 02:34:56.011980  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.011990  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:56.011999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:56.012077  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:56.052624  370051 cri.go:89] found id: ""
	I0229 02:34:56.052653  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.052662  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:56.052668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:56.052722  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:56.089075  370051 cri.go:89] found id: ""
	I0229 02:34:56.089100  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.089108  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:56.089114  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:56.089180  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:56.130369  370051 cri.go:89] found id: ""
	I0229 02:34:56.130403  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.130416  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:56.130424  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:56.130496  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:54.989569  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.991424  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.085652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.585291  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.586439  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.509734  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.510165  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.511749  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.177812  370051 cri.go:89] found id: ""
	I0229 02:34:56.177843  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.177854  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:56.177875  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:56.177894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:56.224294  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:56.224336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:56.275874  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:56.275909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:56.291172  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:56.291202  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:56.364839  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:56.364870  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:56.364888  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:58.950871  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:58.966327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:58.966389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:59.005914  370051 cri.go:89] found id: ""
	I0229 02:34:59.005952  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.005968  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:59.005976  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:59.006045  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:59.043962  370051 cri.go:89] found id: ""
	I0229 02:34:59.043993  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.044005  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:59.044013  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:59.044167  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:59.089398  370051 cri.go:89] found id: ""
	I0229 02:34:59.089426  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.089434  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:59.089440  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:59.089491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:59.130786  370051 cri.go:89] found id: ""
	I0229 02:34:59.130815  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.130824  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:59.130830  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:59.130909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:59.174807  370051 cri.go:89] found id: ""
	I0229 02:34:59.174836  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.174848  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:59.174855  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:59.174929  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:59.217745  370051 cri.go:89] found id: ""
	I0229 02:34:59.217792  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.217800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:59.217806  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:59.217858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:59.260906  370051 cri.go:89] found id: ""
	I0229 02:34:59.260939  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.260950  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:59.260957  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:59.261025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:59.299114  370051 cri.go:89] found id: ""
	I0229 02:34:59.299140  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.299150  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:59.299161  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:59.299173  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:59.349630  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:59.349672  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:59.365679  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:59.365710  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:59.438234  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:59.438261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:59.438280  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:59.524185  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:59.524219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:58.991975  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:01.489719  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:03.490315  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.087731  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.585197  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.008802  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.509210  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.068320  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:02.082910  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:02.082988  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:02.122095  370051 cri.go:89] found id: ""
	I0229 02:35:02.122132  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.122145  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:02.122153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:02.122245  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:02.160982  370051 cri.go:89] found id: ""
	I0229 02:35:02.161013  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.161029  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:02.161043  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:02.161108  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:02.200603  370051 cri.go:89] found id: ""
	I0229 02:35:02.200637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.200650  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:02.200658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:02.200746  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:02.243100  370051 cri.go:89] found id: ""
	I0229 02:35:02.243126  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.243134  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:02.243140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:02.243207  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:02.282758  370051 cri.go:89] found id: ""
	I0229 02:35:02.282793  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.282806  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:02.282815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:02.282884  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:02.324402  370051 cri.go:89] found id: ""
	I0229 02:35:02.324434  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.324444  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:02.324455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:02.324520  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:02.368608  370051 cri.go:89] found id: ""
	I0229 02:35:02.368637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.368650  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:02.368658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:02.368726  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:02.411449  370051 cri.go:89] found id: ""
	I0229 02:35:02.411484  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.411497  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:02.411509  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:02.411526  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:02.427942  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:02.427974  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:02.498848  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:02.498884  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:02.498902  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:02.585701  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:02.585749  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:02.642055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:02.642096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.201769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:05.215944  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:05.216020  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:05.254080  370051 cri.go:89] found id: ""
	I0229 02:35:05.254107  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.254121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:05.254128  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:05.254179  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:05.296990  370051 cri.go:89] found id: ""
	I0229 02:35:05.297022  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.297034  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:05.297042  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:05.297111  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:05.336241  370051 cri.go:89] found id: ""
	I0229 02:35:05.336275  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.336290  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:05.336299  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:05.336395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:05.377620  370051 cri.go:89] found id: ""
	I0229 02:35:05.377649  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.377658  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:05.377664  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:05.377712  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:05.416275  370051 cri.go:89] found id: ""
	I0229 02:35:05.416303  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.416311  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:05.416318  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:05.416373  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:05.455375  370051 cri.go:89] found id: ""
	I0229 02:35:05.455412  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.455426  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:05.455436  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:05.455507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:05.495862  370051 cri.go:89] found id: ""
	I0229 02:35:05.495887  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.495897  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:05.495905  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:05.495969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:05.541218  370051 cri.go:89] found id: ""
	I0229 02:35:05.541247  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.541260  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:05.541273  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:05.541288  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:05.629982  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:05.630023  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:05.719026  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:05.719066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.785318  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:05.785359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:05.801181  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:05.801214  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:05.871333  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:05.490857  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:07.991044  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.587458  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:09.086313  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.510265  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.510391  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.371982  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:08.386451  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:08.386514  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:08.430045  370051 cri.go:89] found id: ""
	I0229 02:35:08.430077  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.430090  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:08.430099  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:08.430169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:08.470547  370051 cri.go:89] found id: ""
	I0229 02:35:08.470583  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.470596  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:08.470604  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:08.470671  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:08.512637  370051 cri.go:89] found id: ""
	I0229 02:35:08.512676  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.512687  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:08.512695  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:08.512759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:08.556228  370051 cri.go:89] found id: ""
	I0229 02:35:08.556263  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.556271  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:08.556277  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:08.556335  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:08.613838  370051 cri.go:89] found id: ""
	I0229 02:35:08.613868  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.613878  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:08.613884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:08.613940  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:08.686408  370051 cri.go:89] found id: ""
	I0229 02:35:08.686442  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.686454  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:08.686462  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:08.686519  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:08.725665  370051 cri.go:89] found id: ""
	I0229 02:35:08.725697  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.725710  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:08.725719  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:08.725776  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:08.765639  370051 cri.go:89] found id: ""
	I0229 02:35:08.765666  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.765674  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:08.765684  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:08.765695  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:08.813097  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:08.813135  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:08.828880  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:08.828909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:08.903237  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:08.903261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:08.903281  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:08.991710  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:08.991745  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:10.491022  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:12.491159  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.086828  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.586274  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.009650  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.011571  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.536724  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:11.551614  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:11.551690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:11.593078  370051 cri.go:89] found id: ""
	I0229 02:35:11.593110  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.593121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:11.593129  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:11.593185  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:11.645696  370051 cri.go:89] found id: ""
	I0229 02:35:11.645729  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.645742  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:11.645751  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:11.645820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:11.691181  370051 cri.go:89] found id: ""
	I0229 02:35:11.691213  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.691226  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:11.691245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:11.691318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:11.745906  370051 cri.go:89] found id: ""
	I0229 02:35:11.745933  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.745946  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:11.745953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:11.746019  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:11.784895  370051 cri.go:89] found id: ""
	I0229 02:35:11.784927  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.784940  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:11.784949  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:11.785025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:11.825341  370051 cri.go:89] found id: ""
	I0229 02:35:11.825372  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.825384  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:11.825392  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:11.825464  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:11.862454  370051 cri.go:89] found id: ""
	I0229 02:35:11.862492  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.862505  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:11.862523  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:11.862604  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:11.908424  370051 cri.go:89] found id: ""
	I0229 02:35:11.908450  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.908459  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:11.908469  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:11.908487  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:11.956274  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:11.956313  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:11.972363  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:11.972397  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:12.052030  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:12.052057  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:12.052078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:12.138388  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:12.138431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.691474  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:14.724652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:14.724739  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:14.765210  370051 cri.go:89] found id: ""
	I0229 02:35:14.765237  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.765246  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:14.765253  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:14.765306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:14.808226  370051 cri.go:89] found id: ""
	I0229 02:35:14.808258  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.808270  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:14.808287  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:14.808357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:14.847999  370051 cri.go:89] found id: ""
	I0229 02:35:14.848030  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.848041  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:14.848049  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:14.848123  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:14.887221  370051 cri.go:89] found id: ""
	I0229 02:35:14.887248  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.887256  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:14.887263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:14.887339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:14.929905  370051 cri.go:89] found id: ""
	I0229 02:35:14.929933  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.929950  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:14.929956  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:14.930011  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:14.969697  370051 cri.go:89] found id: ""
	I0229 02:35:14.969739  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.969761  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:14.969770  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:14.969837  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:15.013387  370051 cri.go:89] found id: ""
	I0229 02:35:15.013418  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.013429  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:15.013437  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:15.013493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:15.058199  370051 cri.go:89] found id: ""
	I0229 02:35:15.058240  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.058253  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:15.058270  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:15.058287  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:15.110165  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:15.110213  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:15.127417  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:15.127452  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:15.203330  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:15.203370  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:15.203405  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:15.283455  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:15.283501  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.991352  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.490127  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.586556  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:18.085962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.509530  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.512518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:20.009873  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.829187  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:17.844678  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:17.844759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:17.885549  370051 cri.go:89] found id: ""
	I0229 02:35:17.885581  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.885594  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:17.885601  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:17.885670  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:17.925652  370051 cri.go:89] found id: ""
	I0229 02:35:17.925679  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.925691  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:17.925699  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:17.925766  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:17.963172  370051 cri.go:89] found id: ""
	I0229 02:35:17.963203  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.963215  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:17.963224  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:17.963282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:18.003528  370051 cri.go:89] found id: ""
	I0229 02:35:18.003560  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.003572  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:18.003579  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:18.003644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:18.046494  370051 cri.go:89] found id: ""
	I0229 02:35:18.046526  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.046537  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:18.046545  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:18.046613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:18.084963  370051 cri.go:89] found id: ""
	I0229 02:35:18.084993  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.085004  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:18.085013  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:18.085074  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:18.125521  370051 cri.go:89] found id: ""
	I0229 02:35:18.125547  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.125556  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:18.125563  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:18.125623  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:18.169963  370051 cri.go:89] found id: ""
	I0229 02:35:18.169995  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.170006  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:18.170020  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:18.170035  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:18.225414  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:18.225460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:18.242069  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:18.242108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:18.312704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:18.312728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:18.312742  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:18.397206  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:18.397249  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
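
The cycle above (cri.go listing each control-plane container by name via "sudo crictl ps -a --quiet --name=...", then logs.go falling back to kubelet, dmesg, and CRI-O logs when nothing is found) can be reproduced by hand. A minimal Go sketch of the same probe, illustrative only and not minikube's implementation, assuming crictl is on PATH and sudo is passwordless on the node:

// criprobe.go: minimal sketch of the container probe seen in the log above.
// Assumptions (not from the log): crictl on PATH, passwordless sudo.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listByName mirrors `sudo crictl ps -a --quiet --name=<name>`: it returns
// the IDs of all containers (any state) whose name matches.
func listByName(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listByName(name)
		if err != nil {
			fmt.Printf("probe %s: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the W-level "No container was found matching ..." lines above.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
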
	I0229 02:35:20.968000  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:20.983115  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:20.983196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:21.025710  370051 cri.go:89] found id: ""
	I0229 02:35:21.025735  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.025743  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:21.025749  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:21.025812  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:21.065825  370051 cri.go:89] found id: ""
	I0229 02:35:21.065854  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.065862  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:21.065868  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:21.065928  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:21.104738  370051 cri.go:89] found id: ""
	I0229 02:35:21.104770  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.104782  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:21.104790  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:21.104871  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:19.990622  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491026  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491059  369591 pod_ready.go:81] duration metric: took 4m0.008454624s waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:22.491069  369591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:35:22.491077  369591 pod_ready.go:38] duration metric: took 4m5.576507129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:22.491094  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:35:22.491124  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:22.491174  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:22.562384  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:22.562412  369591 cri.go:89] found id: ""
	I0229 02:35:22.562422  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:22.562487  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.567997  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:22.568073  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:22.632786  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:22.632811  369591 cri.go:89] found id: ""
	I0229 02:35:22.632822  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:22.632887  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.637899  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:22.637975  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:22.681988  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:22.682014  369591 cri.go:89] found id: ""
	I0229 02:35:22.682024  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:22.682084  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.687515  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:22.687606  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:22.732907  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:22.732931  369591 cri.go:89] found id: ""
	I0229 02:35:22.732939  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:22.732995  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.737695  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:22.737758  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:22.779316  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:22.779341  369591 cri.go:89] found id: ""
	I0229 02:35:22.779349  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:22.779413  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.786533  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:22.786617  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:22.834391  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:22.834420  369591 cri.go:89] found id: ""
	I0229 02:35:22.834430  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:22.834500  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.839386  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:22.839458  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:22.881275  369591 cri.go:89] found id: ""
	I0229 02:35:22.881304  369591 logs.go:276] 0 containers: []
	W0229 02:35:22.881317  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:22.881326  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:22.881404  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:22.932822  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:22.932846  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:22.932850  369591 cri.go:89] found id: ""
	I0229 02:35:22.932858  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:22.932913  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.938541  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.943263  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:22.943288  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:22.994089  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:22.994122  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:23.051780  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:23.051821  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:23.099220  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:23.099251  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:23.157383  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:23.157429  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:23.206125  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:23.206180  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:23.261950  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:23.261982  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:23.324394  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:23.324427  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:23.400608  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:23.400648  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:20.589079  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:23.088469  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.510074  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:24.002388  369869 pod_ready.go:81] duration metric: took 4m0.000212386s waiting for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:24.002420  369869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:35:24.002439  369869 pod_ready.go:38] duration metric: took 4m6.701505951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:24.002490  369869 kubeadm.go:640] restartCluster took 4m24.423602043s
	W0229 02:35:24.002593  369869 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:35:24.002621  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
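
In the 369869 run above, pod_ready.go gives up after waiting 4m0s for the metrics-server pod to report Ready, and the cluster is then reset with kubeadm. A hedged client-go sketch of that style of Ready wait follows; the kubeconfig path, pod name, and 4-minute deadline are taken from the log, everything else is illustrative and not minikube's code:

// podready.go: sketch of waiting for a pod's Ready condition with a deadline,
// in the spirit of the pod_ready.go waits above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // 4m0s, as above
	defer cancel()
	name := "metrics-server-57f55c9bc5-86frx" // pod name from the log
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			// The run above hits this branch and falls through to `kubeadm reset`.
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
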
	I0229 02:35:21.147180  370051 cri.go:89] found id: ""
	I0229 02:35:21.147211  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.147221  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:21.147228  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:21.147284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:21.187240  370051 cri.go:89] found id: ""
	I0229 02:35:21.187275  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.187287  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:21.187295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:21.187389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:21.228873  370051 cri.go:89] found id: ""
	I0229 02:35:21.228899  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.228917  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:21.228924  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:21.228992  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:21.268827  370051 cri.go:89] found id: ""
	I0229 02:35:21.268856  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.268867  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:21.268876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:21.268970  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:21.313253  370051 cri.go:89] found id: ""
	I0229 02:35:21.313288  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.313297  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:21.313307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:21.313328  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:21.448089  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:21.448120  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:21.448146  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:21.539941  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:21.539983  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:21.590148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:21.590186  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:21.647760  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:21.647797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:24.165842  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:24.183263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:24.183345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:24.233173  370051 cri.go:89] found id: ""
	I0229 02:35:24.233208  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.233219  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:24.233228  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:24.233301  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:24.276937  370051 cri.go:89] found id: ""
	I0229 02:35:24.276977  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.276989  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:24.276998  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:24.277066  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:24.314629  370051 cri.go:89] found id: ""
	I0229 02:35:24.314665  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.314678  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:24.314686  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:24.314753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:24.367585  370051 cri.go:89] found id: ""
	I0229 02:35:24.367618  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.367630  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:24.367639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:24.367709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:24.451128  370051 cri.go:89] found id: ""
	I0229 02:35:24.451151  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.451160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:24.451167  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:24.451258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:24.497302  370051 cri.go:89] found id: ""
	I0229 02:35:24.497336  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.497348  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:24.497357  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:24.497431  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:24.544593  370051 cri.go:89] found id: ""
	I0229 02:35:24.544621  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.544632  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:24.544640  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:24.544714  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:24.584570  370051 cri.go:89] found id: ""
	I0229 02:35:24.584601  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.584613  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:24.584626  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:24.584645  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:24.669019  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:24.669044  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:24.669061  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:24.752163  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:24.752205  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:24.811945  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:24.811985  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:24.874832  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:24.874873  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:23.928222  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:23.928275  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:23.983171  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:23.983216  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:23.999343  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:23.999382  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:24.180422  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:24.180476  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.745283  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:26.768785  369591 api_server.go:72] duration metric: took 4m17.549714658s to wait for apiserver process to appear ...
	I0229 02:35:26.768823  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:35:26.768885  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:26.768949  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:26.816275  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:26.816303  369591 cri.go:89] found id: ""
	I0229 02:35:26.816314  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:26.816379  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.820985  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:26.821062  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:26.870520  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.870545  369591 cri.go:89] found id: ""
	I0229 02:35:26.870555  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:26.870613  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.875785  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:26.875869  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:26.926844  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:26.926884  369591 cri.go:89] found id: ""
	I0229 02:35:26.926895  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:26.926963  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.933667  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:26.933747  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:26.988547  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:26.988575  369591 cri.go:89] found id: ""
	I0229 02:35:26.988584  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:26.988645  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.994520  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:26.994600  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.040568  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.040602  369591 cri.go:89] found id: ""
	I0229 02:35:27.040612  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:27.040679  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.046103  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.046161  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.094322  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.094345  369591 cri.go:89] found id: ""
	I0229 02:35:27.094357  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:27.094428  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.101702  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.101779  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.164549  369591 cri.go:89] found id: ""
	I0229 02:35:27.164584  369591 logs.go:276] 0 containers: []
	W0229 02:35:27.164596  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.164604  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:27.164674  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:27.219403  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:27.219431  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:27.219436  369591 cri.go:89] found id: ""
	I0229 02:35:27.219447  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:27.219510  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.226705  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.233551  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:27.233576  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.281111  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:27.281152  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.333686  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:27.333738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:27.948683  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.948736  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:28.018866  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:28.018917  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:28.164820  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:28.164857  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:28.222926  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:28.222963  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:28.265708  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:28.265738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:28.309311  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.309352  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:28.363295  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:28.363341  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:28.384099  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:28.384146  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:28.451988  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:28.452025  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:28.499748  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:28.499783  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:25.586753  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.589329  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.392846  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:27.419255  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:27.419339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:27.465294  370051 cri.go:89] found id: ""
	I0229 02:35:27.465325  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.465337  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:27.465345  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:27.465417  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:27.533393  370051 cri.go:89] found id: ""
	I0229 02:35:27.533424  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.533433  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:27.533441  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:27.533510  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:27.587195  370051 cri.go:89] found id: ""
	I0229 02:35:27.587221  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.587232  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:27.587240  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:27.587313  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:27.638597  370051 cri.go:89] found id: ""
	I0229 02:35:27.638624  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.638632  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:27.638639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:27.638709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.687695  370051 cri.go:89] found id: ""
	I0229 02:35:27.687730  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.687742  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:27.687750  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.687825  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.732275  370051 cri.go:89] found id: ""
	I0229 02:35:27.732309  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.732320  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:27.732327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.732389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.783069  370051 cri.go:89] found id: ""
	I0229 02:35:27.783109  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.783122  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.783133  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:27.783224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:27.832385  370051 cri.go:89] found id: ""
	I0229 02:35:27.832416  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.832429  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:27.832443  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.832460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:27.902610  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:27.902658  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:27.919900  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:27.919947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:28.003313  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:28.003337  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:28.003356  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:28.100814  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.100853  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:30.654289  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:30.683056  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:30.683141  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:30.734678  370051 cri.go:89] found id: ""
	I0229 02:35:30.734704  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.734712  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:30.734719  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:30.734771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:30.780792  370051 cri.go:89] found id: ""
	I0229 02:35:30.780821  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.780830  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:30.780837  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:30.780904  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:30.827244  370051 cri.go:89] found id: ""
	I0229 02:35:30.827269  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.827278  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:30.827285  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:30.827336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:30.871305  370051 cri.go:89] found id: ""
	I0229 02:35:30.871333  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.871342  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:30.871348  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:30.871423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:30.910095  370051 cri.go:89] found id: ""
	I0229 02:35:30.910121  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.910130  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:30.910136  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:30.910188  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:30.955234  370051 cri.go:89] found id: ""
	I0229 02:35:30.955261  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.955271  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:30.955278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:30.955345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:30.996555  370051 cri.go:89] found id: ""
	I0229 02:35:30.996589  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.996602  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:30.996611  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:30.996687  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:31.036424  370051 cri.go:89] found id: ""
	I0229 02:35:31.036454  370051 logs.go:276] 0 containers: []
	W0229 02:35:31.036464  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:31.036474  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.036488  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.107928  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.107987  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.125268  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.125303  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.053142  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:35:31.060477  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:35:31.062106  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:35:31.062143  369591 api_server.go:131] duration metric: took 4.2933111s to wait for apiserver health ...
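
The healthz probe logged just above (api_server.go:253/279) is a plain HTTPS GET against the apiserver that succeeds once the endpoint returns 200 with body "ok". A minimal sketch, using the address from the log and skipping TLS verification only because the throwaway test cluster serves a self-signed certificate:

// healthz.go: sketch of the api_server.go healthz probe above. Illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Self-signed test-cluster cert, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.114:8443/healthz") // address from the log
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok", as above
}
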
	I0229 02:35:31.062154  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:35:31.062189  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:31.062278  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:31.119877  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:31.119905  369591 cri.go:89] found id: ""
	I0229 02:35:31.119915  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:31.119981  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.125569  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:31.125648  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:31.193662  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.193693  369591 cri.go:89] found id: ""
	I0229 02:35:31.193704  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:31.193762  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.199267  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:31.199365  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:31.251832  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.251862  369591 cri.go:89] found id: ""
	I0229 02:35:31.251873  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:31.251935  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.258374  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:31.258477  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:31.309718  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.309745  369591 cri.go:89] found id: ""
	I0229 02:35:31.309753  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:31.309804  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.314949  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:31.315025  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:31.367936  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:31.367960  369591 cri.go:89] found id: ""
	I0229 02:35:31.367970  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:31.368038  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.373072  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:31.373137  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:31.420362  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:31.420390  369591 cri.go:89] found id: ""
	I0229 02:35:31.420402  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:31.420470  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.427151  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:31.427221  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:31.482289  369591 cri.go:89] found id: ""
	I0229 02:35:31.482321  369591 logs.go:276] 0 containers: []
	W0229 02:35:31.482333  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:31.482342  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:31.482405  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:31.526713  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.526738  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.526744  369591 cri.go:89] found id: ""
	I0229 02:35:31.526755  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:31.526807  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.531874  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.536727  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.536758  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.555901  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.555943  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.689587  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:31.689629  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.737625  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:31.737669  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.781015  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:31.781050  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.824727  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:31.824757  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.866867  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.866897  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.920324  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:31.920375  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.962783  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:31.962815  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:32.003525  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:32.003557  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:32.061377  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:32.061417  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:32.454041  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:32.454097  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:32.498969  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:32.499006  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:30.086688  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:32.087795  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:34.585435  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:35.060469  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:35:35.060503  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.060509  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.060516  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.060521  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.060525  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.060530  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.060538  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.060543  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.060553  369591 system_pods.go:74] duration metric: took 3.99838967s to wait for pod list to return data ...
	I0229 02:35:35.060563  369591 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:35:35.063638  369591 default_sa.go:45] found service account: "default"
	I0229 02:35:35.063665  369591 default_sa.go:55] duration metric: took 3.094531ms for default service account to be created ...
	I0229 02:35:35.063676  369591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:35:35.071344  369591 system_pods.go:86] 8 kube-system pods found
	I0229 02:35:35.071366  369591 system_pods.go:89] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.071371  369591 system_pods.go:89] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.071375  369591 system_pods.go:89] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.071380  369591 system_pods.go:89] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.071385  369591 system_pods.go:89] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.071389  369591 system_pods.go:89] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.071397  369591 system_pods.go:89] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.071408  369591 system_pods.go:89] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.071420  369591 system_pods.go:126] duration metric: took 7.737446ms to wait for k8s-apps to be running ...
	I0229 02:35:35.071433  369591 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:35:35.071482  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:35.091472  369591 system_svc.go:56] duration metric: took 20.031453ms WaitForService to wait for kubelet.
	I0229 02:35:35.091504  369591 kubeadm.go:581] duration metric: took 4m25.872454283s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:35:35.091523  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:35:35.095487  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:35:35.095509  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:35:35.095546  369591 node_conditions.go:105] duration metric: took 4.018229ms to run NodePressure ...
	I0229 02:35:35.095567  369591 start.go:228] waiting for startup goroutines ...
	I0229 02:35:35.095580  369591 start.go:233] waiting for cluster config update ...
	I0229 02:35:35.095594  369591 start.go:242] writing updated cluster config ...
	I0229 02:35:35.095888  369591 ssh_runner.go:195] Run: rm -f paused
	I0229 02:35:35.154197  369591 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 02:35:35.156089  369591 out.go:177] * Done! kubectl is now configured to use "no-preload-247751" cluster and "default" namespace by default
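
The node_conditions.go lines a few entries above (ephemeral storage capacity 17734596Ki, cpu capacity 2) are read from Node.Status.Capacity before the NodePressure check completes. A short client-go sketch of that read, illustrative only and reusing the kubeconfig path from the log:

// nodecap.go: sketch of the node_conditions.go capacity read above. Illustrative only.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// Prints values analogous to "node storage ephemeral capacity is 17734596Ki"
		// and "node cpu capacity is 2" in the log above.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
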
	W0229 02:35:31.217691  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:31.217717  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:31.217740  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:31.313847  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:31.313883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:33.861648  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:33.876887  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:33.876954  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:33.921545  370051 cri.go:89] found id: ""
	I0229 02:35:33.921577  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.921588  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:33.921597  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:33.921658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:33.972558  370051 cri.go:89] found id: ""
	I0229 02:35:33.972584  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.972592  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:33.972599  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:33.972662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:34.020821  370051 cri.go:89] found id: ""
	I0229 02:35:34.020852  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.020862  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:34.020873  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:34.020937  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:34.064076  370051 cri.go:89] found id: ""
	I0229 02:35:34.064110  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.064121  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:34.064129  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:34.064191  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:34.108523  370051 cri.go:89] found id: ""
	I0229 02:35:34.108557  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.108568  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:34.108576  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:34.108639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:34.149444  370051 cri.go:89] found id: ""
	I0229 02:35:34.149468  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.149478  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:34.149487  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:34.149562  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:34.193780  370051 cri.go:89] found id: ""
	I0229 02:35:34.193805  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.193814  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:34.193820  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:34.193913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:34.237088  370051 cri.go:89] found id: ""
	I0229 02:35:34.237118  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.237127  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:34.237137  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:34.237151  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:34.281055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:34.281091  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:34.333886  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:34.333925  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:34.353163  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:34.353204  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:34.465925  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:34.465951  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:34.465969  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
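Each gathering pass above probes container status with a shell fallback: use crictl when present, otherwise fall back to docker. The backtick substitution keeps the command well-formed even when crictl is missing from PATH. In isolation:

    #!/bin/bash
    # List all containers with whichever runtime CLI exists on the node.
    # If `which crictl` finds nothing, the literal word "crictl" is substituted,
    # that invocation fails, and the `||` branch runs `docker ps -a` instead.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a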
	I0229 02:35:36.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:39.086456  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:37.049957  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:37.064297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:37.064384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:37.105669  370051 cri.go:89] found id: ""
	I0229 02:35:37.105703  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.105711  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:37.105720  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:37.105790  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:37.143753  370051 cri.go:89] found id: ""
	I0229 02:35:37.143788  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.143799  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:37.143808  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:37.143880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:37.180126  370051 cri.go:89] found id: ""
	I0229 02:35:37.180157  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.180166  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:37.180173  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:37.180227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:37.221135  370051 cri.go:89] found id: ""
	I0229 02:35:37.221173  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.221185  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:37.221193  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:37.221261  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:37.258888  370051 cri.go:89] found id: ""
	I0229 02:35:37.258920  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.258932  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:37.258940  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:37.259005  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:37.300970  370051 cri.go:89] found id: ""
	I0229 02:35:37.300998  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.301010  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:37.301018  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:37.301105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:37.349797  370051 cri.go:89] found id: ""
	I0229 02:35:37.349829  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.349841  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:37.349850  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:37.349916  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:37.408726  370051 cri.go:89] found id: ""
	I0229 02:35:37.408762  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.408773  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:37.408787  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:37.408805  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:37.462030  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:37.462064  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:37.477836  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:37.477868  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:37.553886  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:37.553924  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:37.553941  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:37.644637  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:37.644683  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:40.197937  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:40.212830  370051 kubeadm.go:640] restartCluster took 4m14.648338345s
	W0229 02:35:40.212984  370051 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 02:35:40.213021  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:35:40.673169  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:40.690108  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:40.702424  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:40.713782  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:35:40.713832  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
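Since the reset removed every /etc/kubernetes/*.conf, the status-2 `ls` above is expected: minikube skips stale-config cleanup and proceeds straight to a fresh `kubeadm init`. Unwrapped from the log, that invocation is:

    # Re-initialize the control plane from minikube's rendered config.
    # The --ignore-preflight-errors list whitelists exactly the leftovers a
    # re-init over an existing data directory would otherwise trip on
    # (non-empty manifest dirs, existing etcd data, port 10250, swap, CPU count).
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU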
	I0229 02:35:40.775345  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:35:40.775527  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:35:40.929045  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:35:40.929185  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:35:40.929310  370051 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 02:35:41.154311  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:35:41.154449  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:35:41.162905  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:35:41.317651  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:35:41.319260  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:35:41.319358  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:35:41.319458  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:35:41.319564  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:35:41.319675  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:35:41.319772  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:35:41.319857  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:35:41.319963  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:35:41.320066  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:35:41.320166  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:35:41.320289  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:35:41.320357  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:35:41.320439  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:35:41.457291  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:35:41.599703  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:35:41.766344  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:35:41.939397  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:35:41.940740  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:35:41.090698  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:43.585822  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:41.942544  370051 out.go:204]   - Booting up control plane ...
	I0229 02:35:41.942656  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:35:41.946949  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:35:41.949540  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:35:41.950426  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:35:41.953310  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
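While kubeadm waits (up to 4m0s) for the kubelet to launch the control plane, the static Pod manifests it just wrote are the thing to watch on the node; a sketch, assuming the standard locations:

    # Manifests kubeadm generated for the control-plane static Pods:
    ls /etc/kubernetes/manifests/
    # -> etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

    # Follow the kubelet while it pulls images and starts those Pods:
    sudo journalctl -u kubelet -f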
	I0229 02:35:45.586855  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:48.085961  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:50.585602  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:52.587992  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:55.085046  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.086710  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:59.590441  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.264698  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.262039409s)
	I0229 02:35:57.264826  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:57.285615  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:57.297607  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:57.309412  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:35:57.309471  369869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:35:57.540175  369869 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:36:02.086317  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:04.587625  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.714158  369869 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:36:06.714249  369869 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:36:06.714325  369869 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:36:06.714490  369869 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:36:06.714633  369869 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 02:36:06.714742  369869 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:36:06.716059  369869 out.go:204]   - Generating certificates and keys ...
	I0229 02:36:06.716160  369869 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:36:06.716250  369869 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:36:06.716357  369869 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:36:06.716434  369869 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:36:06.716508  369869 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:36:06.716572  369869 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:36:06.716649  369869 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:36:06.716722  369869 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:36:06.716824  369869 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:36:06.716952  369869 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:36:06.717008  369869 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:36:06.717080  369869 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:36:06.717147  369869 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:36:06.717221  369869 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:36:06.717298  369869 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:36:06.717367  369869 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:36:06.717474  369869 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:36:06.717559  369869 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:36:06.718770  369869 out.go:204]   - Booting up control plane ...
	I0229 02:36:06.718866  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:36:06.718983  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:36:06.719074  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:36:06.719230  369869 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:36:06.719364  369869 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:36:06.719431  369869 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:36:06.719628  369869 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:36:06.719749  369869 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503520 seconds
	I0229 02:36:06.719906  369869 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:36:06.720060  369869 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:36:06.720126  369869 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:36:06.720344  369869 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-071485 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:36:06.720433  369869 kubeadm.go:322] [bootstrap-token] Using token: oueq3v.8ghuyl6sece1tffl
	I0229 02:36:06.721973  369869 out.go:204]   - Configuring RBAC rules ...
	I0229 02:36:06.722107  369869 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:36:06.722252  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:36:06.722444  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:36:06.722643  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:36:06.722793  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:36:06.722937  369869 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:36:06.723081  369869 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:36:06.723119  369869 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:36:06.723188  369869 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:36:06.723198  369869 kubeadm.go:322] 
	I0229 02:36:06.723285  369869 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:36:06.723310  369869 kubeadm.go:322] 
	I0229 02:36:06.723426  369869 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:36:06.723436  369869 kubeadm.go:322] 
	I0229 02:36:06.723467  369869 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:36:06.723556  369869 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:36:06.723637  369869 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:36:06.723646  369869 kubeadm.go:322] 
	I0229 02:36:06.723713  369869 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:36:06.723722  369869 kubeadm.go:322] 
	I0229 02:36:06.723799  369869 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:36:06.723809  369869 kubeadm.go:322] 
	I0229 02:36:06.723869  369869 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:36:06.723979  369869 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:36:06.724073  369869 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:36:06.724083  369869 kubeadm.go:322] 
	I0229 02:36:06.724178  369869 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:36:06.724269  369869 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:36:06.724279  369869 kubeadm.go:322] 
	I0229 02:36:06.724389  369869 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724520  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 \
	I0229 02:36:06.724552  369869 kubeadm.go:322] 	--control-plane 
	I0229 02:36:06.724560  369869 kubeadm.go:322] 
	I0229 02:36:06.724665  369869 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:36:06.724675  369869 kubeadm.go:322] 
	I0229 02:36:06.724767  369869 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724923  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
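The join commands above carry a bootstrap token (default TTL 24h) plus the CA certificate hash that lets joining nodes authenticate the control plane. If the token expires before another node joins, a replacement, join command included, can be minted on the control-plane node:

    # Inspect current bootstrap tokens.
    kubeadm token list

    # Create a fresh token and print a ready-to-use `kubeadm join` line.
    kubeadm token create --print-join-command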
	I0229 02:36:06.724941  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:36:06.724952  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:36:06.726566  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:36:07.088398  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:09.587442  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.727880  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:36:06.786343  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
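The 457 bytes written to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI configuration; the log records only the size, not the contents. A representative bridge conflist of roughly that shape (the subnet and plugin list are illustrative assumptions, not the logged bytes):

    # Hypothetical reconstruction -- the actual 1-k8s.conflist is not shown in the log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF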
	I0229 02:36:06.842349  369869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:36:06.842420  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=default-k8s-diff-port-071485 minikube.k8s.io/updated_at=2024_02_29T02_36_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:06.842428  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.196763  369869 ops.go:34] apiserver oom_adj: -16
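Immediately after init, minikube labels the node with its version and commit metadata and grants the kube-system default ServiceAccount cluster-admin so that addon manifests can be applied; it also records the apiserver's oom_adj of -16, meaning the kernel OOM killer will strongly prefer to keep the apiserver alive. The RBAC step on its own:

    # The clusterrolebinding minikube creates right after kubeadm init.
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:default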
	I0229 02:36:07.196958  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.696991  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.197336  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.697155  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.197955  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.697107  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:10.197816  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.085528  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:14.085852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:10.697486  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.197744  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.697179  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.197614  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.697015  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.197983  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.697315  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.196982  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.698012  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.197896  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.697895  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.197062  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.697819  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.197222  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.697031  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.197683  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.697094  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.870924  369869 kubeadm.go:1088] duration metric: took 12.028572011s to wait for elevateKubeSystemPrivileges.
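The half-second `get sa default` loop above is the readiness probe behind elevateKubeSystemPrivileges: the `default` ServiceAccount only materializes once the controller manager's ServiceAccount controller is reconciling, so its appearance signals that the cluster can accept the RBAC changes. The same wait, written as a plain retry loop (the timeout ceiling is an assumption):

    # Poll until the default ServiceAccount exists, i.e. controllers are live.
    for _ in $(seq 1 120); do   # ~60s ceiling at 0.5s intervals (assumed)
      sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1 && break
      sleep 0.5
    done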
	I0229 02:36:18.870961  369869 kubeadm.go:406] StartCluster complete in 5m19.353203226s
	I0229 02:36:18.870986  369869 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.871077  369869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:36:18.873654  369869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.873954  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:36:18.874041  369869 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:36:18.874118  369869 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874130  369869 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874142  369869 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874149  369869 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:36:18.874152  369869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071485"
	I0229 02:36:18.874201  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874256  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:36:18.874341  369869 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874359  369869 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874367  369869 addons.go:243] addon metrics-server should already be in state true
	I0229 02:36:18.874422  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874637  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874691  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874811  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874846  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.892207  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0229 02:36:18.892260  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0229 02:36:18.892967  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.892986  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.893508  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893528  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893680  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893700  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893936  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894102  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894143  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0229 02:36:18.894331  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.894582  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.894594  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.894613  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.895109  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.895143  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.895508  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.896106  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.896142  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.898127  369869 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.898143  369869 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:36:18.898168  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.898482  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.898516  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.917303  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0229 02:36:18.917472  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I0229 02:36:18.917747  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.917894  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.918493  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918510  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.918654  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918665  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.919012  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919077  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919229  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.919754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.921030  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.922677  369869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:36:18.921622  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.923872  369869 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:18.923899  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:36:18.923919  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.925237  369869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:36:18.926153  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:36:18.924603  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0229 02:36:18.926269  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:36:18.926303  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.927739  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.928184  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.928277  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.928299  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.930032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.930057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.930386  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.930456  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.930614  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.930723  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.930914  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931014  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.931133  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.931185  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.931533  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.931553  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.931576  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931737  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.932033  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.932190  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.948311  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0229 02:36:18.949328  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.949793  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.949819  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.950313  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.950529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.952381  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.952660  369869 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:18.952673  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:36:18.952689  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.956332  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.956779  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.956808  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.957117  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.957313  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.957425  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.957485  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:19.128114  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:19.141619  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:36:19.141649  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:36:19.169945  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:19.187099  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:36:19.187124  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:36:19.211358  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
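The sed pipeline above rewrites the live coredns ConfigMap: it inserts a `hosts` block resolving host.minikube.internal to the VM's gateway (192.168.61.1 here) ahead of the `forward` plugin, inserts the `log` plugin before `errors`, and pushes the result back with `kubectl replace`. The completion message later in the log ("host record injected into CoreDNS's ConfigMap") confirms it took effect; it can also be checked directly:

    # Show the injected hosts block in the rendered Corefile.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
      | sed -n '/hosts {/,/}/p'
    # expected:
    #   hosts {
    #      192.168.61.1 host.minikube.internal
    #      fallthrough
    #   }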
	I0229 02:36:19.289856  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:36:19.289880  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:36:19.398720  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
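The metrics-server addon lands as four manifests in a single apply: the APIService registration, the Deployment, its RBAC, and the Service. Note the earlier "Using image fake.domain/registry.k8s.io/echoserver:1.4" line: these integration tests deliberately point metrics-server at an unpullable image, which is consistent with the metrics-server pods elsewhere in this report polling as "Ready":"False". A hedged way to observe that state (the k8s-app label is the addon's standard selector):

    # Watch the metrics-server rollout; in this test it is expected to stay unready.
    kubectl -n kube-system get pods -l k8s-app=metrics-server -w

    # Inspect why: the image pull from fake.domain can never succeed.
    kubectl -n kube-system describe pods -l k8s-app=metrics-server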
	I0229 02:36:19.414512  369869 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-071485" context rescaled to 1 replicas
	I0229 02:36:19.414562  369869 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:36:19.416389  369869 out.go:177] * Verifying Kubernetes components...
	I0229 02:36:15.586606  369508 pod_ready.go:81] duration metric: took 4m0.008250092s waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:36:15.586638  369508 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:36:15.586648  369508 pod_ready.go:38] duration metric: took 4m5.573018241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:36:15.586669  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:36:15.586707  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:15.586771  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:15.644937  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:15.644969  369508 cri.go:89] found id: ""
	I0229 02:36:15.644980  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:15.645054  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.653058  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:15.653137  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:15.709225  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:15.709254  369508 cri.go:89] found id: ""
	I0229 02:36:15.709264  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:15.709333  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.715304  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:15.715391  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:15.769593  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:15.769627  369508 cri.go:89] found id: ""
	I0229 02:36:15.769637  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:15.769702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.775157  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:15.775230  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:15.820002  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:15.820030  369508 cri.go:89] found id: ""
	I0229 02:36:15.820040  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:15.820105  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.827058  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:15.827122  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:15.875030  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:15.875063  369508 cri.go:89] found id: ""
	I0229 02:36:15.875074  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:15.875142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.880489  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:15.880555  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:15.929452  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:15.929476  369508 cri.go:89] found id: ""
	I0229 02:36:15.929484  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:15.929545  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.934321  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:15.934396  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:15.981960  369508 cri.go:89] found id: ""
	I0229 02:36:15.981997  369508 logs.go:276] 0 containers: []
	W0229 02:36:15.982006  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:15.982014  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:15.982077  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:16.034169  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.034196  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.034201  369508 cri.go:89] found id: ""
	I0229 02:36:16.034210  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:16.034281  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.039463  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.044719  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:16.044748  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:16.111048  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:16.111084  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:16.278784  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:16.278832  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:16.333048  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:16.333085  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:16.376514  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:16.376555  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:16.420840  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:16.420944  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:16.468273  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:16.468308  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:16.526001  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:16.526043  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.569084  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:16.569120  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.609818  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:16.609847  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:16.660979  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:16.661019  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:16.677397  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:16.677432  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:16.732421  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:16.732464  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
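[editor's note] For readers tracing the cri.go / logs.go lines above: the pattern is to enumerate container IDs per component with crictl, then tail each container's logs. Below is a minimal, self-contained Go sketch of that pattern — not minikube's actual implementation; it assumes crictl is on PATH and the caller can sudo, and the helper names are hypothetical.

// crilogs.go - hypothetical sketch mirroring the crictl enumeration/tailing in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches `name`,
// like the `sudo crictl ps -a --quiet --name=<name>` calls logged by cri.go.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line; empty when none found
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "storage-provisioner"} {
		ids, err := containerIDs(component)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", component)
			continue
		}
		for _, id := range ids {
			// Equivalent of the `sudo /usr/bin/crictl logs --tail 400 <id>` runs above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
		}
	}
}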
	I0229 02:36:19.417788  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:21.277741  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.107753576s)
	I0229 02:36:21.277802  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277815  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.066425449s)
	I0229 02:36:21.277873  369869 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.149690589s)
	I0229 02:36:21.277908  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277918  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278323  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278331  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278339  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278351  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278445  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278458  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278465  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278474  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278519  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278592  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278603  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280452  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.280470  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280482  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.300880  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.300907  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.301193  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.301217  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.572633  369869 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.154816183s)
	I0229 02:36:21.572676  369869 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.572635  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.173852857s)
	I0229 02:36:21.572814  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.572842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.573207  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573215  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573228  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.573236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573538  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573575  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573587  369869 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-071485"
	I0229 02:36:21.575111  369869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
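[editor's note] The "host.minikube.internal" record above is injected by rewriting the CoreDNS Corefile: the long sed pipeline completed at 02:36:21 inserts a hosts stanza before the forward directive, then kubectl-replaces the ConfigMap. A rough Go equivalent of just the text transformation, shown only to make that sed command readable; the sample Corefile content is illustrative.

// corednsinject.go - sketch of the hosts-block injection performed by the sed pipeline above.
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS `hosts` stanza immediately before the
// `forward . /etc/resolv.conf` line, mirroring the sed expression in the log.
func injectHostRecord(corefile, ip string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", ip)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hosts)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        health\n        forward . /etc/resolv.conf\n        cache 30\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.61.1"))
}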
	I0229 02:36:19.738493  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:36:19.758171  369508 api_server.go:72] duration metric: took 4m17.008228834s to wait for apiserver process to appear ...
	I0229 02:36:19.758199  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:36:19.758281  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:19.758349  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:19.811042  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:19.811071  369508 cri.go:89] found id: ""
	I0229 02:36:19.811082  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:19.811145  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.817952  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:19.818034  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:19.871006  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:19.871033  369508 cri.go:89] found id: ""
	I0229 02:36:19.871043  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:19.871109  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.877440  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:19.877512  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:19.928043  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:19.928071  369508 cri.go:89] found id: ""
	I0229 02:36:19.928081  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:19.928142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.935299  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:19.935363  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:19.977360  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:19.977391  369508 cri.go:89] found id: ""
	I0229 02:36:19.977402  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:19.977482  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.982361  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:19.982442  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:20.025903  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.025931  369508 cri.go:89] found id: ""
	I0229 02:36:20.025941  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:20.026012  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.031390  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:20.031477  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:20.080768  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.080792  369508 cri.go:89] found id: ""
	I0229 02:36:20.080800  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:20.080864  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.087322  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:20.087388  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:20.139067  369508 cri.go:89] found id: ""
	I0229 02:36:20.139111  369508 logs.go:276] 0 containers: []
	W0229 02:36:20.139124  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:20.139132  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:20.139195  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:20.193052  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:20.193085  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:20.193091  369508 cri.go:89] found id: ""
	I0229 02:36:20.193101  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:20.193174  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.199740  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.205385  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:20.205414  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:20.360843  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:20.360894  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:20.411077  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:20.411113  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:20.459855  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:20.459910  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:20.517056  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:20.517101  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.568151  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:20.568185  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.637131  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:20.637165  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.144933  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:21.144980  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:21.206565  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:21.206607  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:21.257071  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:21.257118  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:21.315541  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:21.315589  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:21.358630  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:21.358665  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:21.398170  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:21.398201  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:23.914059  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:36:23.923854  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:36:23.926443  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:36:23.926466  369508 api_server.go:131] duration metric: took 4.168260413s to wait for apiserver health ...
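[editor's note] The healthz round-trip at 02:36:23 boils down to an HTTPS GET whose body must read "ok". A self-contained sketch of that probe follows; the endpoint URL is taken from the log, and skipping TLS verification here stands in for the cluster client certificates the real check uses.

// healthz.go - sketch of the apiserver healthz probe logged by api_server.go above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz treats HTTP 200 with body "ok" as healthy, as in the log above.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: production code authenticates with cluster certs instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(probeHealthz("https://192.168.50.218:8443/healthz"))
}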
	I0229 02:36:23.926475  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:36:23.926506  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:23.926566  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:24.013825  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:24.013849  369508 cri.go:89] found id: ""
	I0229 02:36:24.013857  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:24.013913  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.019432  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:24.019506  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:24.078857  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.078877  369508 cri.go:89] found id: ""
	I0229 02:36:24.078885  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:24.078945  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.083761  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:24.083822  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:24.133681  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:24.133707  369508 cri.go:89] found id: ""
	I0229 02:36:24.133717  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:24.133779  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.139165  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:24.139228  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:24.185863  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.185883  369508 cri.go:89] found id: ""
	I0229 02:36:24.185892  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:24.185939  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.191094  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:24.191164  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:24.232922  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.232953  369508 cri.go:89] found id: ""
	I0229 02:36:24.232963  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:24.233031  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.238154  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:24.238252  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:24.280735  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:24.280760  369508 cri.go:89] found id: ""
	I0229 02:36:24.280769  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:24.280842  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.285497  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:24.285558  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:24.324979  369508 cri.go:89] found id: ""
	I0229 02:36:24.325007  369508 logs.go:276] 0 containers: []
	W0229 02:36:24.325016  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:24.325022  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:24.325085  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:24.370875  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:24.370908  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:24.370912  369508 cri.go:89] found id: ""
	I0229 02:36:24.370919  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:24.370973  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.378247  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.382856  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:24.382899  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.430889  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:24.430919  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.470370  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:24.470407  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.576300  369869 addons.go:505] enable addons completed in 2.702258052s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 02:36:21.582468  369869 node_ready.go:49] node "default-k8s-diff-port-071485" has status "Ready":"True"
	I0229 02:36:21.582494  369869 node_ready.go:38] duration metric: took 9.804213ms waiting for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.582506  369869 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:36:21.608694  369869 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125662  369869 pod_ready.go:92] pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.125695  369869 pod_ready.go:81] duration metric: took 1.51697387s waiting for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125707  369869 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141831  369869 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.141855  369869 pod_ready.go:81] duration metric: took 16.140002ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141864  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154216  369869 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.154261  369869 pod_ready.go:81] duration metric: took 12.389751ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154276  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166057  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.166085  369869 pod_ready.go:81] duration metric: took 11.798242ms waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166098  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179414  369869 pod_ready.go:92] pod "kube-proxy-gr44w" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.179437  369869 pod_ready.go:81] duration metric: took 13.331411ms waiting for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179447  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576569  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.576597  369869 pod_ready.go:81] duration metric: took 397.142516ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576611  369869 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:21.953781  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:36:21.954431  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:21.954685  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
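[editor's note] The repeating [kubelet-check] failures for process 370051 come from kubeadm polling the kubelet's local health endpoint until it answers. A sketch of that probe loop, assuming illustrative intervals (kubeadm's real backoff schedule differs):

// kubeletcheck.go - sketch of the kubelet health poll behind the [kubelet-check] lines above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls http://localhost:10248/healthz, the same URL the
// [kubelet-check] lines report, until it answers 200 or the deadline passes.
func waitForKubelet(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// "connection refused" here means the kubelet is not running or not healthy.
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for the kubelet after %s", timeout)
}

func main() { fmt.Println(waitForKubelet(40 * time.Second)) }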
	I0229 02:36:24.880947  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:24.880985  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:24.939045  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:24.939079  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.987109  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:24.987144  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:25.049095  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:25.049131  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:25.091654  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:25.091686  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:25.153281  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:25.153326  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:25.169544  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:25.169575  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:25.294469  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:25.294504  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:25.346867  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:25.346900  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:25.388876  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:25.388921  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:27.937848  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:36:27.937878  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.937883  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.937888  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.937891  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.937894  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.937898  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.937903  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.937908  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.937922  369508 system_pods.go:74] duration metric: took 4.011440564s to wait for pod list to return data ...
	I0229 02:36:27.937933  369508 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:36:27.940602  369508 default_sa.go:45] found service account: "default"
	I0229 02:36:27.940623  369508 default_sa.go:55] duration metric: took 2.681589ms for default service account to be created ...
	I0229 02:36:27.940632  369508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:36:27.947433  369508 system_pods.go:86] 8 kube-system pods found
	I0229 02:36:27.947455  369508 system_pods.go:89] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.947466  369508 system_pods.go:89] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.947472  369508 system_pods.go:89] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.947482  369508 system_pods.go:89] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.947491  369508 system_pods.go:89] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.947497  369508 system_pods.go:89] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.947508  369508 system_pods.go:89] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.947518  369508 system_pods.go:89] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.947531  369508 system_pods.go:126] duration metric: took 6.892538ms to wait for k8s-apps to be running ...
	I0229 02:36:27.947539  369508 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:36:27.947591  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:27.965730  369508 system_svc.go:56] duration metric: took 18.181663ms WaitForService to wait for kubelet.
	I0229 02:36:27.965756  369508 kubeadm.go:581] duration metric: took 4m25.215820473s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:36:27.965780  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:36:27.970094  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:36:27.970123  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:36:27.970138  369508 node_conditions.go:105] duration metric: took 4.347423ms to run NodePressure ...
	I0229 02:36:27.970152  369508 start.go:228] waiting for startup goroutines ...
	I0229 02:36:27.970162  369508 start.go:233] waiting for cluster config update ...
	I0229 02:36:27.970175  369508 start.go:242] writing updated cluster config ...
	I0229 02:36:27.970529  369508 ssh_runner.go:195] Run: rm -f paused
	I0229 02:36:28.020686  369508 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:36:28.022730  369508 out.go:177] * Done! kubectl is now configured to use "embed-certs-915633" cluster and "default" namespace by default
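[editor's note] The final status line for this run reports the kubectl-versus-cluster minor-version skew (1.29 vs 1.28 gives skew 1). A hypothetical sketch of that comparison; version strings are from the log, the helper is not minikube's code.

// skew.go - sketch of the minor-skew computation reported by start.go:601 above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, e.g. ("1.29.2", "1.28.4") -> 1.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("malformed version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.29.2", "1.28.4")
	fmt.Printf("kubectl: 1.29.2, cluster: 1.28.4 (minor skew: %d)\n", skew)
}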
	I0229 02:36:25.585985  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:28.085278  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:26.954801  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:26.955093  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:30.583462  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:32.584198  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:34.585129  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:37.085551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:39.584450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:36.955344  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:36.955543  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:41.585000  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:44.083919  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:46.085694  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:48.583474  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:50.584026  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:53.084622  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:55.084729  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:57.084941  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:59.586329  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:56.957911  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:56.958178  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:02.085189  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:04.085672  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:06.586906  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:09.085130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:11.583811  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:13.585179  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:16.083670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:18.084884  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:20.584395  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:22.585487  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:24.586088  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:26.586608  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:29.084644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:31.585292  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:34.083690  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:36.959509  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:37:36.959795  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:36.959812  370051 kubeadm.go:322] 
	I0229 02:37:36.959848  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:37:36.959887  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:37:36.959893  370051 kubeadm.go:322] 
	I0229 02:37:36.959937  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:37:36.959991  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:37:36.960142  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:37:36.960167  370051 kubeadm.go:322] 
	I0229 02:37:36.960282  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:37:36.960318  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:37:36.960362  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:37:36.960371  370051 kubeadm.go:322] 
	I0229 02:37:36.960482  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:37:36.960617  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:37:36.960756  370051 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:37:36.960839  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:37:36.960951  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:37:36.961015  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:37:36.961366  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:37:36.961507  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:37:36.961616  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 02:37:36.961763  370051 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 02:37:36.961835  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
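[editor's note] After the "! initialization failed, will try again" warning, the node is torn down with `kubeadm reset` and `kubeadm init` is rerun (visible at 02:37:37 below). A stripped-down sketch of that retry pattern; the preflight-ignore and cri-socket flags from the log are omitted for brevity, and the single-retry policy is an assumption.

// initretry.go - sketch of the reset-then-retry pattern shown in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("$ sudo %v\n%s", args, out)
	return err
}

// initWithRetry mirrors the log: if `kubeadm init` fails, clear state with
// `kubeadm reset --force`, then try init once more.
func initWithRetry(configPath string) error {
	if err := run("kubeadm", "init", "--config", configPath); err == nil {
		return nil
	}
	if err := run("kubeadm", "reset", "--force"); err != nil {
		return fmt.Errorf("reset failed: %w", err)
	}
	return run("kubeadm", "init", "--config", configPath)
}

func main() {
	if err := initWithRetry("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println("init failed twice:", err)
	}
}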
	I0229 02:37:37.427665  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:37:37.443045  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:37:37.456937  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
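[editor's note] The "config check failed, skipping stale config cleanup" line above interprets the `ls` exit status: status 2 means the kubeconfig files simply do not exist, so there is no stale configuration to remove before the retried init. The same check without shelling out, as a sketch:

// staleconfig.go - sketch of the stale-kubeconfig existence check in the log above.
package main

import (
	"fmt"
	"os"
)

// staleConfigsPresent reports whether all kubeconfig files a previous
// `kubeadm init` would have written still exist. If any is missing (the
// `ls: cannot access ...` case above), cleanup can be skipped.
func staleConfigsPresent() bool {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println("stale configs present:", staleConfigsPresent())
}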
	I0229 02:37:37.456979  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:37:37.529093  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:37:37.529246  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:37:37.670260  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:37:37.670417  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:37:37.670548  370051 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:37:37.904220  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:37:37.905569  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:37:37.914919  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:37:38.070911  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:37:38.072738  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:37:38.072860  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:37:38.072951  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:37:38.073049  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:37:38.073132  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:37:38.073230  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:37:38.073299  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:37:38.073376  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:37:38.073458  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:37:38.073566  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:37:38.073680  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:37:38.073720  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:37:38.073794  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:37:38.209805  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:37:38.305550  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:37:38.464715  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:37:38.623139  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:37:38.624364  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:37:36.084556  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.086561  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.625883  370051 out.go:204]   - Booting up control plane ...
	I0229 02:37:38.626039  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:37:38.630668  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:37:38.631740  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:37:38.632687  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:37:38.636043  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:37:40.583589  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:42.583968  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:44.584409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:46.586413  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:49.084223  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:51.584770  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:53.584871  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:55.585299  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:58.084753  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:00.584432  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:03.085511  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:05.585519  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:08.085774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:10.087984  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:12.584744  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:15.085757  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:17.584807  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:19.588130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
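[editor's note] The long run of pod_ready.go:102 lines is a poll loop: every couple of seconds the test re-reads the pod's Ready condition until it flips to "True" or the 6m0s budget runs out. A sketch using kubectl and JSONPath; the context and pod names come from the log, while the 2-second interval is illustrative rather than minikube's actual cadence.

// podready.go - sketch of the Ready-condition poll behind the pod_ready.go lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls the pod's Ready condition until it is "True" or timeout.
func waitPodReady(context, namespace, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
			"get", "pod", pod, "-o",
			`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second) // the log shows re-checks every few seconds
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	err := waitPodReady("default-k8s-diff-port-071485", "kube-system",
		"metrics-server-57f55c9bc5-fpwzl", 6*time.Minute)
	fmt.Println(err)
}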
	I0229 02:38:18.637746  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:38:18.638616  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:18.638883  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:22.084442  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:24.085227  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:23.639374  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:23.639613  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:26.087774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:28.584872  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:30.587375  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.085060  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:35.086106  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.640169  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:33.640468  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:37.584670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:40.085797  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:42.585365  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:44.587079  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:46.590638  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:49.086500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:51.584286  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.587405  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.640871  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:53.641147  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:56.084551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:58.085668  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:00.086247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:02.588854  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:05.085163  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:07.090885  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:09.583687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:11.585184  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:14.085800  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:16.086643  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:18.584073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:21.084992  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:23.585496  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:25.586111  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:28.086464  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:33.642813  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:39:33.643083  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:39:33.643099  370051 kubeadm.go:322] 
	I0229 02:39:33.643153  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:39:33.643206  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:39:33.643213  370051 kubeadm.go:322] 
	I0229 02:39:33.643252  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:39:33.643296  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:39:33.643443  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:39:33.643455  370051 kubeadm.go:322] 
	I0229 02:39:33.643605  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:39:33.643655  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:39:33.643700  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:39:33.643714  370051 kubeadm.go:322] 
	I0229 02:39:33.643871  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:39:33.644040  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:39:33.644193  370051 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:39:33.644272  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:39:33.644371  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:39:33.644412  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:39:33.644855  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:39:33.644972  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:39:33.645065  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:39:33.645132  370051 kubeadm.go:406] StartCluster complete in 8m8.138449101s
	I0229 02:39:33.645178  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:39:33.645255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:39:33.699121  370051 cri.go:89] found id: ""
	I0229 02:39:33.699154  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.699166  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:39:33.699174  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:39:33.699240  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:39:33.747229  370051 cri.go:89] found id: ""
	I0229 02:39:33.747260  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.747272  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:39:33.747279  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:39:33.747349  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:39:33.789303  370051 cri.go:89] found id: ""
	I0229 02:39:33.789334  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.789343  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:39:33.789350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:39:33.789413  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:39:33.832769  370051 cri.go:89] found id: ""
	I0229 02:39:33.832801  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.832814  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:39:33.832824  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:39:33.832891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:39:33.881508  370051 cri.go:89] found id: ""
	I0229 02:39:33.881543  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.881554  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:39:33.881571  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:39:33.881635  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:39:33.941691  370051 cri.go:89] found id: ""
	I0229 02:39:33.941728  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.941740  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:39:33.941749  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:39:33.941822  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:39:33.990639  370051 cri.go:89] found id: ""
	I0229 02:39:33.990681  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.990704  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:39:33.990713  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:39:33.990774  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:39:34.038426  370051 cri.go:89] found id: ""
	I0229 02:39:34.038460  370051 logs.go:276] 0 containers: []
	W0229 02:39:34.038470  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:39:34.038480  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:39:34.038497  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:39:34.054571  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:39:34.054604  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:39:34.131297  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:39:34.131323  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:39:34.131337  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:39:34.232302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:39:34.232349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:39:34.283314  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:39:34.283351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:39:34.336858  370051 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
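	# Note: the kubeadm hint above assumes docker, but this run uses CRI-O, so the
	# equivalent listing (mirroring the crictl calls minikube itself issues above)
	# would be a sketch along these lines:
	#   sudo crictl ps -a | grep kube | grep -v pause
	#   sudo crictl logs CONTAINERID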
	W0229 02:39:34.336920  370051 out.go:239] * 
	W0229 02:39:34.336985  370051 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	
	W0229 02:39:34.337006  370051 out.go:239] * 
	W0229 02:39:34.337787  370051 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:39:34.340744  370051 out.go:177] 
	W0229 02:39:34.342096  370051 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	
	W0229 02:39:34.342137  370051 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:39:34.342160  370051 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:39:34.343540  370051 out.go:177] 
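	# Note: the suggestion above maps to a concrete retry; a minimal sketch, reusing
	# the binary path and profile name from this run:
	#   out/minikube-linux-amd64 start -p old-k8s-version-275488 \
	#     --extra-config=kubelet.cgroup-driver=systemd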
	I0229 02:39:30.584963  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:32.585599  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:34.588073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.292749586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174376292715943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16f7ad19-f6cc-4d50-afc9-fac1f31337de name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.293556776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=304532c6-b206-49a7-9937-c8bc69ede4e1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.293642175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=304532c6-b206-49a7-9937-c8bc69ede4e1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.293683425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=304532c6-b206-49a7-9937-c8bc69ede4e1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.331859776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc870ae8-f3b8-45e9-8d98-d5a150518426 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.331953188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc870ae8-f3b8-45e9-8d98-d5a150518426 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.333473874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee6adb53-e367-43cc-8ea7-b73b401bad82 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.333850204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174376333824918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee6adb53-e367-43cc-8ea7-b73b401bad82 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.334498476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4669b602-e4c4-4ccf-99ed-19310c7f86f5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.334554593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4669b602-e4c4-4ccf-99ed-19310c7f86f5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.334586573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4669b602-e4c4-4ccf-99ed-19310c7f86f5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.372114475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3210937d-fe6b-4cd4-9c45-6e7b86a0dd77 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.372278092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3210937d-fe6b-4cd4-9c45-6e7b86a0dd77 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.373782024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=204bddd6-e8d6-4a59-9d8a-016c7f579501 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.374302936Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174376374273999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=204bddd6-e8d6-4a59-9d8a-016c7f579501 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.375121849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a7eb046-4bf4-43d5-9520-91a83095375e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.375236088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a7eb046-4bf4-43d5-9520-91a83095375e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.375278812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5a7eb046-4bf4-43d5-9520-91a83095375e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.412532528Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9186ddc-7a2e-446b-b2eb-3717421c6c8a name=/runtime.v1.RuntimeService/Version
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.412690719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9186ddc-7a2e-446b-b2eb-3717421c6c8a name=/runtime.v1.RuntimeService/Version
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.415060222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a05ea0b7-8aae-4aad-b22a-b533bd209937 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.415764948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174376415719710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a05ea0b7-8aae-4aad-b22a-b533bd209937 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.416785787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2dc6e937-fbd4-430c-bd36-4453319723e8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.416854084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2dc6e937-fbd4-430c-bd36-4453319723e8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:39:36 old-k8s-version-275488 crio[644]: time="2024-02-29 02:39:36.416900045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2dc6e937-fbd4-430c-bd36-4453319723e8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 02:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052077] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045395] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.718888] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb29 02:31] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.696519] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.748716] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.071940] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.086978] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.246454] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.137859] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.350900] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[ +17.818498] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.668154] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[Feb29 02:35] systemd-fstab-generator[8056]: Ignoring "noauto" option for root device
	[  +0.077012] kauditd_printk_skb: 15 callbacks suppressed
	[Feb29 02:37] systemd-fstab-generator[9745]: Ignoring "noauto" option for root device
	[  +0.066300] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:39:36 up 8 min,  0 users,  load average: 0.12, 0.36, 0.21
	Linux old-k8s-version-275488 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 02:39:34 old-k8s-version-275488 kubelet[11413]: F0229 02:39:34.740683   11413 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:39:34 old-k8s-version-275488 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:39:34 old-k8s-version-275488 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:39:35 old-k8s-version-275488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Feb 29 02:39:35 old-k8s-version-275488 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:39:35 old-k8s-version-275488 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:39:35 old-k8s-version-275488 kubelet[11433]: I0229 02:39:35.462522   11433 server.go:410] Version: v1.16.0
	Feb 29 02:39:35 old-k8s-version-275488 kubelet[11433]: I0229 02:39:35.462934   11433 plugins.go:100] No cloud provider specified.
	Feb 29 02:39:35 old-k8s-version-275488 kubelet[11433]: I0229 02:39:35.462959   11433 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:39:35 old-k8s-version-275488 kubelet[11433]: I0229 02:39:35.469023   11433 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:39:35 old-k8s-version-275488 kubelet[11433]: W0229 02:39:35.471654   11433 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:39:35 old-k8s-version-275488 kubelet[11433]: F0229 02:39:35.471829   11433 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:39:35 old-k8s-version-275488 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:39:35 old-k8s-version-275488 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:39:36 old-k8s-version-275488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 157.
	Feb 29 02:39:36 old-k8s-version-275488 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:39:36 old-k8s-version-275488 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:39:36 old-k8s-version-275488 kubelet[11453]: I0229 02:39:36.222066   11453 server.go:410] Version: v1.16.0
	Feb 29 02:39:36 old-k8s-version-275488 kubelet[11453]: I0229 02:39:36.222324   11453 plugins.go:100] No cloud provider specified.
	Feb 29 02:39:36 old-k8s-version-275488 kubelet[11453]: I0229 02:39:36.222337   11453 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:39:36 old-k8s-version-275488 kubelet[11453]: I0229 02:39:36.224710   11453 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:39:36 old-k8s-version-275488 kubelet[11453]: W0229 02:39:36.225637   11453 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:39:36 old-k8s-version-275488 kubelet[11453]: F0229 02:39:36.225714   11453 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:39:36 old-k8s-version-275488 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:39:36 old-k8s-version-275488 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
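The kubelet section above ends in a crash loop on "failed to run Kubelet: mountpoint for cpu not found", i.e. the v1.16.0 kubelet cannot find a cpu cgroup hierarchy to run under. A minimal sketch for confirming that from the host, reusing the profile name from these logs:

	# A cgroup-v1 guest should list a mounted 'cpu' controller; a cgroup-v2-only guest
	# exposes just /sys/fs/cgroup (type cgroup2), which a v1.16 kubelet cannot use.
	out/minikube-linux-amd64 ssh -p old-k8s-version-275488 -- grep cgroup /proc/mounts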
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275488 -n old-k8s-version-275488
E0229 02:39:37.822241  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:39:37.825490  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 2 (250.153241ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-275488" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (781.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-247751 -n no-preload-247751
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-02-29 02:44:35.767746236 +0000 UTC m=+5633.460893151
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
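When this wait times out, the selector can be checked by hand; a sketch using the namespace and label selector recorded in the failure above (the kubectl context name is assumed to match the minikube profile):

	kubectl --context no-preload-247751 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard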
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247751 -n no-preload-247751
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-247751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-247751 logs -n 25: (2.312291932s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-117441 sudo cat                              | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo find                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo crio                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-117441                                       | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-542968 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | disable-driver-mounts-542968                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:23 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-915633            | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247751             | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071485  | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275488        | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-915633                 | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247751                  | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071485       | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:40 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275488             | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
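	For reference, the last start recorded in the table can be replayed by hand. This is a minimal sketch, assuming a built out/minikube-linux-amd64 binary and a working kvm2 driver on the host (both visible in the environment dump in the log below); the flags are copied verbatim from the final table row:
	
	    # Replay the old-k8s-version start exercised here, with the flags recorded above
	    out/minikube-linux-amd64 start -p old-k8s-version-275488 \
	      --memory=2200 --alsologtostderr --wait=true \
	      --kvm-network=default --kvm-qemu-uri=qemu:///system \
	      --disable-driver-mounts --keep-context=false \
	      --driver=kvm2 --container-runtime=crio \
	      --kubernetes-version=v1.16.0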
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:26:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
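	(Reading note: the entries below follow the klog format named above; for example, the prefix "I0229 02:26:36.132854  370051 out.go:291]" decodes as severity I (info), date 02/29, time 02:26:36.132854, thread id 370051, and source location out.go:291.)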
	I0229 02:26:36.132854  370051 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:26:36.133389  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133407  370051 out.go:304] Setting ErrFile to fd 2...
	I0229 02:26:36.133414  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133912  370051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:26:36.134959  370051 out.go:298] Setting JSON to false
	I0229 02:26:36.135907  370051 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7739,"bootTime":1709165857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:26:36.135982  370051 start.go:139] virtualization: kvm guest
	I0229 02:26:36.137916  370051 out.go:177] * [old-k8s-version-275488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:26:36.139510  370051 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:26:36.139543  370051 notify.go:220] Checking for updates...
	I0229 02:26:36.141206  370051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:26:36.142776  370051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:26:36.143982  370051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:26:36.145097  370051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:26:36.146170  370051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:26:36.147751  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:26:36.148198  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.148298  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.163969  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0229 02:26:36.164373  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.164977  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.165003  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.165394  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.165584  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.167312  370051 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 02:26:36.168337  370051 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:26:36.168641  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.168683  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.184089  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0229 02:26:36.184605  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.185181  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.185210  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.185551  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.185723  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.222261  370051 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:26:36.223363  370051 start.go:299] selected driver: kvm2
	I0229 02:26:36.223374  370051 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.223487  370051 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:26:36.224130  370051 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.224195  370051 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:26:36.239302  370051 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:26:36.239664  370051 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:26:36.239741  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:26:36.239755  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:26:36.239765  370051 start_flags.go:323] config:
	{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.239908  370051 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.241466  370051 out.go:177] * Starting control plane node old-k8s-version-275488 in cluster old-k8s-version-275488
	I0229 02:26:35.666509  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:38.738602  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:36.242536  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:26:36.242564  370051 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 02:26:36.242573  370051 cache.go:56] Caching tarball of preloaded images
	I0229 02:26:36.242641  370051 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:26:36.242651  370051 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 02:26:36.242742  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:26:36.242905  370051 start.go:365] acquiring machines lock for old-k8s-version-275488: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:26:44.818494  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:47.890482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:53.970508  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:57.042448  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:03.122506  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:06.194415  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:12.274520  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:15.346558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:21.426515  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:24.498557  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:30.578502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:33.650482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:39.730548  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:42.802507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:48.882487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:51.954507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:58.034498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:01.106530  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:07.186513  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:10.258485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:16.338519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:19.410521  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:25.490436  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:28.562555  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:34.642534  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:37.714514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:43.794519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:46.866487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:52.946514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:56.018488  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:02.098512  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:05.170472  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:11.250485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:14.322454  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:20.402450  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:23.474533  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:29.554541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:32.626489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:38.706558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:41.778502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:47.858493  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:50.930489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:57.010541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:00.082537  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:06.162498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:09.234502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:12.238620  369591 start.go:369] acquired machines lock for "no-preload-247751" in 4m33.303501223s
	I0229 02:30:12.238705  369591 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:12.238716  369591 fix.go:54] fixHost starting: 
	I0229 02:30:12.239171  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:12.239240  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:12.254984  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0229 02:30:12.255490  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:12.255991  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:30:12.256012  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:12.256463  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:12.256668  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:12.256840  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:30:12.258341  369591 fix.go:102] recreateIfNeeded on no-preload-247751: state=Stopped err=<nil>
	I0229 02:30:12.258371  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	W0229 02:30:12.258522  369591 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:12.260176  369591 out.go:177] * Restarting existing kvm2 VM for "no-preload-247751" ...
	I0229 02:30:12.261521  369591 main.go:141] libmachine: (no-preload-247751) Calling .Start
	I0229 02:30:12.261678  369591 main.go:141] libmachine: (no-preload-247751) Ensuring networks are active...
	I0229 02:30:12.262375  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network default is active
	I0229 02:30:12.262642  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network mk-no-preload-247751 is active
	I0229 02:30:12.262962  369591 main.go:141] libmachine: (no-preload-247751) Getting domain xml...
	I0229 02:30:12.263526  369591 main.go:141] libmachine: (no-preload-247751) Creating domain...
	I0229 02:30:13.474816  369591 main.go:141] libmachine: (no-preload-247751) Waiting to get IP...
	I0229 02:30:13.475810  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.476251  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.476305  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.476230  370599 retry.go:31] will retry after 302.404435ms: waiting for machine to come up
	I0229 02:30:13.780776  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.781237  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.781265  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.781193  370599 retry.go:31] will retry after 364.673363ms: waiting for machine to come up
	I0229 02:30:12.236310  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:12.236352  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:30:12.238426  369508 machine.go:91] provisioned docker machine in 4m37.406828317s
	I0229 02:30:12.238513  369508 fix.go:56] fixHost completed within 4m37.429140371s
	I0229 02:30:12.238526  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 4m37.429164063s
	W0229 02:30:12.238553  369508 start.go:694] error starting host: provision: host is not running
	W0229 02:30:12.238763  369508 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 02:30:12.238784  369508 start.go:709] Will try again in 5 seconds ...
	I0229 02:30:14.148040  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.148530  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.148561  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.148471  370599 retry.go:31] will retry after 430.606986ms: waiting for machine to come up
	I0229 02:30:14.581180  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.581649  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.581679  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.581598  370599 retry.go:31] will retry after 557.726488ms: waiting for machine to come up
	I0229 02:30:15.141289  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.141736  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.141767  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.141675  370599 retry.go:31] will retry after 611.257074ms: waiting for machine to come up
	I0229 02:30:15.754464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.754802  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.754831  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.754752  370599 retry.go:31] will retry after 905.484801ms: waiting for machine to come up
	I0229 02:30:16.661691  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:16.662072  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:16.662099  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:16.662020  370599 retry.go:31] will retry after 1.007584217s: waiting for machine to come up
	I0229 02:30:17.671565  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:17.672118  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:17.672159  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:17.672048  370599 retry.go:31] will retry after 933.310317ms: waiting for machine to come up
	I0229 02:30:18.607108  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:18.607473  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:18.607496  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:18.607426  370599 retry.go:31] will retry after 1.135856775s: waiting for machine to come up
	I0229 02:30:17.239210  369508 start.go:365] acquiring machines lock for embed-certs-915633: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:30:19.744656  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:19.745017  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:19.745047  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:19.744969  370599 retry.go:31] will retry after 2.184552748s: waiting for machine to come up
	I0229 02:30:21.932313  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:21.932764  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:21.932794  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:21.932711  370599 retry.go:31] will retry after 2.256573009s: waiting for machine to come up
	I0229 02:30:24.191551  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:24.191987  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:24.192016  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:24.191948  370599 retry.go:31] will retry after 3.0850751s: waiting for machine to come up
	I0229 02:30:27.278526  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:27.278941  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:27.278977  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:27.278914  370599 retry.go:31] will retry after 3.196492358s: waiting for machine to come up
	I0229 02:30:31.627482  369869 start.go:369] acquired machines lock for "default-k8s-diff-port-071485" in 4m6.129938439s
	I0229 02:30:31.627553  369869 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:31.627561  369869 fix.go:54] fixHost starting: 
	I0229 02:30:31.628005  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:31.628052  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:31.645217  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0229 02:30:31.645607  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:31.646146  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:30:31.646179  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:31.646526  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:31.646754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:31.646941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:30:31.648372  369869 fix.go:102] recreateIfNeeded on default-k8s-diff-port-071485: state=Stopped err=<nil>
	I0229 02:30:31.648410  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	W0229 02:30:31.648603  369869 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:31.650778  369869 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071485" ...
	I0229 02:30:30.479186  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479664  369591 main.go:141] libmachine: (no-preload-247751) Found IP for machine: 192.168.72.114
	I0229 02:30:30.479694  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has current primary IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479705  369591 main.go:141] libmachine: (no-preload-247751) Reserving static IP address...
	I0229 02:30:30.480161  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.480199  369591 main.go:141] libmachine: (no-preload-247751) DBG | skip adding static IP to network mk-no-preload-247751 - found existing host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"}
	I0229 02:30:30.480213  369591 main.go:141] libmachine: (no-preload-247751) Reserved static IP address: 192.168.72.114
	I0229 02:30:30.480233  369591 main.go:141] libmachine: (no-preload-247751) Waiting for SSH to be available...
	I0229 02:30:30.480246  369591 main.go:141] libmachine: (no-preload-247751) DBG | Getting to WaitForSSH function...
	I0229 02:30:30.482557  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.482907  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.482935  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.483110  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH client type: external
	I0229 02:30:30.483136  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa (-rw-------)
	I0229 02:30:30.483166  369591 main.go:141] libmachine: (no-preload-247751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:30.483180  369591 main.go:141] libmachine: (no-preload-247751) DBG | About to run SSH command:
	I0229 02:30:30.483197  369591 main.go:141] libmachine: (no-preload-247751) DBG | exit 0
	I0229 02:30:30.610329  369591 main.go:141] libmachine: (no-preload-247751) DBG | SSH cmd err, output: <nil>: 
	I0229 02:30:30.610691  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetConfigRaw
	I0229 02:30:30.611393  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.614007  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614393  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.614426  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614689  369591 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/config.json ...
	I0229 02:30:30.614872  369591 machine.go:88] provisioning docker machine ...
	I0229 02:30:30.614892  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:30.615096  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615250  369591 buildroot.go:166] provisioning hostname "no-preload-247751"
	I0229 02:30:30.615272  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615444  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.617525  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617800  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.617835  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617898  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.618095  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618289  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618424  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.618564  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.618790  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.618807  369591 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-247751 && echo "no-preload-247751" | sudo tee /etc/hostname
	I0229 02:30:30.740902  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-247751
	
	I0229 02:30:30.740952  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.743879  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744353  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.744396  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744584  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.744843  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745014  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745197  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.745351  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.745525  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.745543  369591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-247751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247751/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-247751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:30.867175  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:30.867209  369591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:30.867229  369591 buildroot.go:174] setting up certificates
	I0229 02:30:30.867240  369591 provision.go:83] configureAuth start
	I0229 02:30:30.867248  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.867521  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.870143  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870443  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.870464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870678  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.872992  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873434  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.873463  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873643  369591 provision.go:138] copyHostCerts
	I0229 02:30:30.873713  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:30.873740  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:30.873830  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:30.873937  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:30.873948  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:30.873992  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:30.874070  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:30.874080  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:30.874110  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:30.874240  369591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.no-preload-247751 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube no-preload-247751]
	I0229 02:30:30.921711  369591 provision.go:172] copyRemoteCerts
	I0229 02:30:30.921769  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:30.921793  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.924128  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924436  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.924474  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924628  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.924815  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.924975  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.925073  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.009229  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:31.035962  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:30:31.062947  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:31.089920  369591 provision.go:86] duration metric: configureAuth took 222.667724ms
	I0229 02:30:31.089947  369591 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:31.090145  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:30:31.090256  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.092831  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093148  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.093192  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093338  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.093511  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093699  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093864  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.094032  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.094196  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.094211  369591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:31.381995  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:31.382023  369591 machine.go:91] provisioned docker machine in 767.136363ms
	I0229 02:30:31.382036  369591 start.go:300] post-start starting for "no-preload-247751" (driver="kvm2")
	I0229 02:30:31.382049  369591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:31.382066  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.382560  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:31.382596  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.385219  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385574  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.385602  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385742  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.385955  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.386091  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.386254  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.469621  369591 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:31.474615  369591 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:31.474640  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:31.474702  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:31.474772  369591 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:31.474867  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:31.484964  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:31.512459  369591 start.go:303] post-start completed in 130.406384ms
	I0229 02:30:31.512519  369591 fix.go:56] fixHost completed within 19.27376704s
	I0229 02:30:31.512569  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.515169  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515568  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.515596  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515717  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.515944  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516108  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516260  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.516417  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.516592  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.516605  369591 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:30:31.627335  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173831.594794890
	
	I0229 02:30:31.627357  369591 fix.go:206] guest clock: 1709173831.594794890
	I0229 02:30:31.627366  369591 fix.go:219] Guest: 2024-02-29 02:30:31.59479489 +0000 UTC Remote: 2024-02-29 02:30:31.512545974 +0000 UTC m=+292.733991044 (delta=82.248916ms)
	I0229 02:30:31.627395  369591 fix.go:190] guest clock delta is within tolerance: 82.248916ms
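The guest-clock check above runs date +%s.%N over SSH, parses the result, and compares it with the host clock; a resync is only forced when the delta exceeds a tolerance. A sketch of the parse-and-compare step, assuming the guest output has already been captured as a string; the 2s tolerance is an assumption, not minikube's exact value:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1709173831.594794890" (date +%s.%N output) into a
	// time.Time. It assumes %N printed all nine fractional digits, as above.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1709173831.594794890") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}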
	I0229 02:30:31.627403  369591 start.go:83] releasing machines lock for "no-preload-247751", held for 19.38873796s
	I0229 02:30:31.627429  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.627713  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:31.630486  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.630930  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.630959  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.631131  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631640  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631830  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631920  369591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:31.631983  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.632122  369591 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:31.632160  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.634658  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.634874  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635050  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635079  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635348  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635354  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635379  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635478  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635566  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635633  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635758  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635768  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635934  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.635940  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.719735  369591 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:31.739831  369591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:31.891138  369591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:31.899497  369591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:31.899569  369591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:31.921755  369591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
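Before CNI setup, every bridge/podman conflist in /etc/cni/net.d except the loopback config is moved aside by renaming it to <name>.mk_disabled, which is what the find/-exec mv pipeline above does. A rough equivalent in Go, again a sketch rather than the real implementation (it needs root to actually rename anything under /etc):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeConfigs renames bridge/podman CNI configs in dir to
	// <name>.mk_disabled, mirroring the find/mv pipeline in the log above.
	func disableBridgeConfigs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableBridgeConfigs("/etc/cni/net.d")
		fmt.Println(disabled, err)
	}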
	I0229 02:30:31.921785  369591 start.go:475] detecting cgroup driver to use...
	I0229 02:30:31.921896  369591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:31.938157  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:31.952761  369591 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:31.952834  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:31.966785  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:31.980931  369591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:32.091879  369591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:32.261190  369591 docker.go:233] disabling docker service ...
	I0229 02:30:32.261272  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:32.278862  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:32.295382  369591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:32.433426  369591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:32.557975  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
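Switching the runtime to cri-o is a fixed stop/disable/mask pass over the cri-docker and docker systemd units, run best-effort so a missing unit does not abort provisioning. A compact sketch of that sequence, assuming sudo and systemctl are available on the host:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// silenceUnit stops, disables, and masks a systemd unit so it cannot come
	// back on reboot; errors are reported but not fatal, matching the
	// best-effort style of the log above.
	func silenceUnit(unit string) {
		for _, args := range [][]string{
			{"systemctl", "stop", "-f", unit},
			{"systemctl", "disable", unit},
			{"systemctl", "mask", unit},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				fmt.Printf("%v: %v (%s)\n", args, err, out)
			}
		}
	}

	func main() {
		for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
			silenceUnit(u)
		}
	}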
	I0229 02:30:32.573791  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:32.595797  369591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:32.595848  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.608978  369591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:32.609042  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.621681  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.634251  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.647107  369591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
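The cri-o configuration itself is three in-place rewrites of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod" right after it. The sed one-liners above can be mirrored with regexp rewrites; a sketch operating on the file contents as a string:

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies the three edits from the log above: pin the
	// pause image, force cgroupfs, and re-add conmon_cgroup = "pod" right
	// after the cgroup_manager line.
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return conf
	}

	func main() {
		in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(rewriteCrioConf(in))
	}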
	I0229 02:30:32.660478  369591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:32.672596  369591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:32.672662  369591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:32.688480  369591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
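The netfilter check is probe-first: if the bridge-nf-call-iptables sysctl node is missing (the status 255 above), br_netfilter is simply not loaded yet, so it is modprobed and IPv4 forwarding is switched on. A sketch of that fallback; it needs root to take effect:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the fallback in the log above: if the
	// bridge-nf-call-iptables sysctl is absent, br_netfilter is not loaded,
	// so modprobe it, then make sure IPv4 forwarding is on.
	func ensureBridgeNetfilter() error {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v (%s)", err, out)
			}
		}
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
	}

	func main() {
		fmt.Println(ensureBridgeNetfilter()) // needs root to actually succeed
	}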
	I0229 02:30:32.700769  369591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:32.823703  369591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:30:33.004444  369591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:33.004531  369591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:33.010801  369591 start.go:543] Will wait 60s for crictl version
	I0229 02:30:33.010862  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.015224  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:33.064627  369591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:33.064721  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.108265  369591 ssh_runner.go:195] Run: crio --version
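After crio restarts, provisioning polls up to 60s for the socket path and then up to 60s more for a responsive crictl, as the "Will wait 60s" lines above show. A sketch of the socket wait with a fixed poll interval (the 500ms interval is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes, mirroring
	// the "Will wait 60s for socket path" step in the log above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}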
	I0229 02:30:33.142639  369591 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 02:30:33.144169  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:33.147250  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147609  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:33.147644  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147836  369591 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:33.153138  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:33.169427  369591 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:30:33.169481  369591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:33.214079  369591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 02:30:33.214113  369591 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:30:33.214193  369591 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.214216  369591 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.214252  369591 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.214276  369591 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.214335  369591 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.214323  369591 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.214354  369591 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 02:30:33.214241  369591 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.215880  369591 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215928  369591 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.215947  369591 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.216082  369591 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.216136  369591 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.216252  369591 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
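With no preload tarball for v1.29.0-rc.2, LoadImages falls back to per-image handling; the "daemon lookup ... No such image" lines above only mean the local Docker daemon has no copy, so the on-disk cache is used instead. The first step is diffing the required list against what the runtime already has; a sketch of that diff, assuming the output of crictl images --output json has been captured:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// missingImages compares the images required by the Kubernetes version
	// against what `crictl images --output json` reports on the guest.
	func missingImages(crictlJSON []byte, required []string) ([]string, error) {
		var out struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}
		if err := json.Unmarshal(crictlJSON, &out); err != nil {
			return nil, err
		}
		have := map[string]bool{}
		for _, img := range out.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		var missing []string
		for _, want := range required {
			if !have[want] {
				missing = append(missing, want)
			}
		}
		return missing, nil
	}

	func main() {
		sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
		missing, err := missingImages(sample, []string{
			"registry.k8s.io/pause:3.9",
			"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
		})
		fmt.Println(missing, err)
	}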
	I0229 02:30:33.348095  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 02:30:33.434211  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.496911  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.499249  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.503235  369591 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0229 02:30:33.503274  369591 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.503307  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.507506  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.548265  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.551287  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.589427  369591 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0229 02:30:33.589474  369591 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.589523  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.590660  369591 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0229 02:30:33.590688  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.590708  369591 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.590763  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.636886  369591 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0229 02:30:33.636934  369591 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.637001  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.664221  369591 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0229 02:30:33.664266  369591 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.664316  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.691890  369591 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0229 02:30:33.691945  369591 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.691978  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.691993  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.692003  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692096  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.692107  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.692104  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692165  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.793616  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793708  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.793723  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793772  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793839  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 02:30:33.793853  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793856  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 02:30:33.793884  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0229 02:30:33.793902  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.793910  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:33.793914  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:33.793936  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
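Each missing image then goes through the same per-image pipeline: remove the stale tag with crictl rmi, copy the cached tarball into /var/lib/minikube/images (skipped when stat shows it already exists), and podman load it so cri-o can see it. A sketch of the load step, assuming the tarball is already in place on the guest:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage mirrors the per-image pipeline in the log above: the
	// tarball is assumed to already be at tarPath on the guest (the stat/copy
	// step), then podman load makes it visible to cri-o.
	func loadCachedImage(image, tarPath string) error {
		// Best-effort removal of any stale tag first.
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()
		if out, err := exec.Command("sudo", "podman", "load", "-i", tarPath).CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v (%s)", tarPath, err, out)
		}
		return nil
	}

	func main() {
		err := loadCachedImage(
			"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
			"/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2",
		)
		fmt.Println(err)
	}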
	I0229 02:30:31.652037  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Start
	I0229 02:30:31.652202  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring networks are active...
	I0229 02:30:31.652984  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network default is active
	I0229 02:30:31.653457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network mk-default-k8s-diff-port-071485 is active
	I0229 02:30:31.653909  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Getting domain xml...
	I0229 02:30:31.654724  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Creating domain...
	I0229 02:30:32.911561  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting to get IP...
	I0229 02:30:32.912505  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.912932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.913032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:32.912928  370716 retry.go:31] will retry after 285.213813ms: waiting for machine to come up
	I0229 02:30:33.199327  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199733  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199764  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.199678  370716 retry.go:31] will retry after 334.890426ms: waiting for machine to come up
	I0229 02:30:33.536492  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.536976  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.537006  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.536924  370716 retry.go:31] will retry after 344.946846ms: waiting for machine to come up
	I0229 02:30:33.883432  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883911  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.883858  370716 retry.go:31] will retry after 516.135135ms: waiting for machine to come up
	I0229 02:30:34.401167  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401592  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401621  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.401543  370716 retry.go:31] will retry after 538.013174ms: waiting for machine to come up
	I0229 02:30:34.941529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942080  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942116  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.942039  370716 retry.go:31] will retry after 883.013858ms: waiting for machine to come up
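Interleaved with the image loads, the second profile's VM boot is polled for a DHCP lease with randomized, growing backoff (285ms, 334ms, 344ms, 516ms, ... above). A generic sketch of that retry shape; lookupIP is hypothetical and just fails a few times before succeeding:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("no DHCP lease yet")

	// lookupIP is a stand-in for querying libvirt's DHCP leases; it is
	// hypothetical and fails a few times before "finding" an address.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errNoLease
		}
		return "192.168.61.233", nil
	}

	func main() {
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			// Randomized backoff that grows with each attempt, like the
			// "will retry after ..." intervals in the log above.
			wait := time.Duration(200+rand.Intn(200)*attempt) * time.Millisecond
			fmt.Printf("retry %d: %v, waiting %v\n", attempt, err, wait)
			time.Sleep(wait)
		}
	}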
	I0229 02:30:33.850786  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0229 02:30:33.850868  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 02:30:33.850977  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:34.154343  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.987957  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (3.194013383s)
	I0229 02:30:36.987999  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0229 02:30:36.988100  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.194139784s)
	I0229 02:30:36.988127  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0229 02:30:36.988148  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.194207246s)
	I0229 02:30:36.988178  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0229 02:30:36.988156  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988191  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.194323563s)
	I0229 02:30:36.988206  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988236  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988269  369591 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.833890629s)
	I0229 02:30:36.988240  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.13724749s)
	I0229 02:30:36.988310  369591 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 02:30:36.988331  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988343  369591 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.988375  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:36.993483  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:38.351556  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.363290185s)
	I0229 02:30:38.351599  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0229 02:30:38.351633  369591 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351632  369591 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.358113254s)
	I0229 02:30:38.351686  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 02:30:38.351705  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351782  369591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:35.827402  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827906  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:35.827872  370716 retry.go:31] will retry after 902.653821ms: waiting for machine to come up
	I0229 02:30:36.732470  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732925  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732957  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:36.732863  370716 retry.go:31] will retry after 1.322376383s: waiting for machine to come up
	I0229 02:30:38.057306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057874  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:38.057790  370716 retry.go:31] will retry after 1.16249498s: waiting for machine to come up
	I0229 02:30:39.221714  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222197  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:39.222156  370716 retry.go:31] will retry after 1.912383064s: waiting for machine to come up
	I0229 02:30:42.350149  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.998331984s)
	I0229 02:30:42.350198  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0229 02:30:42.350214  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.99848453s)
	I0229 02:30:42.350266  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0229 02:30:42.350305  369591 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:42.350357  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:41.135736  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136113  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136144  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:41.136058  370716 retry.go:31] will retry after 2.823296742s: waiting for machine to come up
	I0229 02:30:43.960885  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961677  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961703  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:43.961582  370716 retry.go:31] will retry after 3.266272258s: waiting for machine to come up
	I0229 02:30:44.528869  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.178478896s)
	I0229 02:30:44.528915  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0229 02:30:44.528947  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:44.529014  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:46.991074  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462030604s)
	I0229 02:30:46.991103  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0229 02:30:46.991129  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:46.991195  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:47.229005  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229478  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229511  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:47.229417  370716 retry.go:31] will retry after 3.429712893s: waiting for machine to come up
	I0229 02:30:51.887858  370051 start.go:369] acquired machines lock for "old-k8s-version-275488" in 4m15.644916266s
	I0229 02:30:51.887935  370051 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:51.887944  370051 fix.go:54] fixHost starting: 
	I0229 02:30:51.888374  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:51.888428  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:51.905851  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0229 02:30:51.906292  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:51.906778  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:30:51.906806  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:51.907250  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:51.907459  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:30:51.907631  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetState
	I0229 02:30:51.909061  370051 fix.go:102] recreateIfNeeded on old-k8s-version-275488: state=Stopped err=<nil>
	I0229 02:30:51.909093  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	W0229 02:30:51.909251  370051 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:51.911318  370051 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275488" ...
	I0229 02:30:50.662939  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663341  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Found IP for machine: 192.168.61.233
	I0229 02:30:50.663366  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserving static IP address...
	I0229 02:30:50.663404  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has current primary IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663745  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.663781  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserved static IP address: 192.168.61.233
	I0229 02:30:50.663804  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | skip adding static IP to network mk-default-k8s-diff-port-071485 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"}
	I0229 02:30:50.663819  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for SSH to be available...
	I0229 02:30:50.663830  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Getting to WaitForSSH function...
	I0229 02:30:50.665924  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666270  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.666306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666411  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH client type: external
	I0229 02:30:50.666435  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa (-rw-------)
	I0229 02:30:50.666464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:50.666477  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | About to run SSH command:
	I0229 02:30:50.666489  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | exit 0
	I0229 02:30:50.794598  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | SSH cmd err, output: <nil>: 
	I0229 02:30:50.795011  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetConfigRaw
	I0229 02:30:50.795753  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:50.798443  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.798796  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.798822  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.799151  369869 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/config.json ...
	I0229 02:30:50.799410  369869 machine.go:88] provisioning docker machine ...
	I0229 02:30:50.799440  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:50.799684  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.799937  369869 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071485"
	I0229 02:30:50.799963  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.800129  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.802457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802786  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.802813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802923  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.803087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803281  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803393  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.803527  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.803744  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.803757  369869 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071485 && echo "default-k8s-diff-port-071485" | sudo tee /etc/hostname
	I0229 02:30:50.930812  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071485
	
	I0229 02:30:50.930849  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.933650  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934017  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.934057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934217  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.934458  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934651  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.934964  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.935141  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.935159  369869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071485/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:51.057233  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:51.057266  369869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:51.057307  369869 buildroot.go:174] setting up certificates
	I0229 02:30:51.057321  369869 provision.go:83] configureAuth start
	I0229 02:30:51.057335  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:51.057615  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.060233  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060563  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.060595  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060707  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.062583  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.062889  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.062938  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.063065  369869 provision.go:138] copyHostCerts
	I0229 02:30:51.063121  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:51.063140  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:51.063193  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:51.063290  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:51.063304  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:51.063332  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:51.063396  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:51.063403  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:51.063420  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:51.063482  369869 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071485 san=[192.168.61.233 192.168.61.233 localhost 127.0.0.1 minikube default-k8s-diff-port-071485]
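Provisioning regenerates the machine's server certificate with every name the endpoint might be reached by in the SAN list: the IP (twice, as recorded above), localhost, 127.0.0.1, minikube, and the profile name. A sketch of building such a certificate with crypto/x509; minikube signs it with its local CA, while this sketch self-signs purely to show the SAN layout:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// selfSignedServerCert sketches the server.pem generation step: the SANs
	// carry every IP and DNS name from the log line above.
	func selfSignedServerCert(cn string, ips, dns []string) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: cn},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dns,
		}
		for _, ip := range ips {
			tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
		}
		return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	}

	func main() {
		der, err := selfSignedServerCert("default-k8s-diff-port-071485",
			[]string{"192.168.61.233", "127.0.0.1"},
			[]string{"localhost", "minikube", "default-k8s-diff-port-071485"})
		fmt.Println(len(der), err)
	}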
	I0229 02:30:51.180356  369869 provision.go:172] copyRemoteCerts
	I0229 02:30:51.180417  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:51.180446  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.182981  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183262  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.183295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183465  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.183656  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.183814  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.183958  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.270548  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:51.297136  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 02:30:51.323133  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:51.349241  369869 provision.go:86] duration metric: configureAuth took 291.905825ms
	I0229 02:30:51.349269  369869 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:51.349453  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:30:51.349529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.352119  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352473  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.352503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352658  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.352839  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353009  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353122  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.353304  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.353480  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.353495  369869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:51.639987  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:51.640022  369869 machine.go:91] provisioned docker machine in 840.591751ms
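The container-runtime option step drops extra cri-o flags into /etc/sysconfig/crio.minikube (here marking the service CIDR as an insecure registry) and then restarts crio. A sketch of writing that drop-in; the restart itself is left to systemctl:

	package main

	import (
		"fmt"
		"os"
	)

	// writeCrioMinikubeOpts mirrors the sysconfig drop-in from the log above:
	// extra cri-o flags land in /etc/sysconfig/crio.minikube before a restart.
	func writeCrioMinikubeOpts(path, cidr string) error {
		content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", cidr)
		return os.WriteFile(path, []byte(content), 0644)
	}

	func main() {
		fmt.Println(writeCrioMinikubeOpts("/etc/sysconfig/crio.minikube", "10.96.0.0/12"))
	}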
	I0229 02:30:51.640041  369869 start.go:300] post-start starting for "default-k8s-diff-port-071485" (driver="kvm2")
	I0229 02:30:51.640057  369869 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:51.640087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.640450  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:51.640486  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.643118  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643427  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.643464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643661  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.643871  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.644025  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.644164  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.730150  369869 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:51.735109  369869 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:51.735135  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:51.735207  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:51.735298  369869 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:51.735416  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:51.745416  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:51.771727  369869 start.go:303] post-start completed in 131.66845ms
	I0229 02:30:51.771756  369869 fix.go:56] fixHost completed within 20.144195498s
	I0229 02:30:51.771782  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.774300  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774582  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.774610  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774744  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.774972  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.775481  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.775648  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.775659  369869 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:30:51.887656  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173851.865903243
	
	I0229 02:30:51.887683  369869 fix.go:206] guest clock: 1709173851.865903243
	I0229 02:30:51.887691  369869 fix.go:219] Guest: 2024-02-29 02:30:51.865903243 +0000 UTC Remote: 2024-02-29 02:30:51.771760886 +0000 UTC m=+266.432013426 (delta=94.142357ms)
	I0229 02:30:51.887738  369869 fix.go:190] guest clock delta is within tolerance: 94.142357ms
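The fix.go lines above compare the guest clock against the host clock and accept the drift when it is small. A self-contained sketch of that check — only the 94ms delta comes from the log; the 2s tolerance is an assumption for illustration:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDelta reports how far the guest clock is from the host clock
    // and whether that drift falls within the given tolerance.
    func clockDelta(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
    	d := guest.Sub(host)
    	if d < 0 {
    		d = -d
    	}
    	return d, d <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(94 * time.Millisecond) // delta seen in the log above
    	d, ok := clockDelta(host, guest, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
    }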
	I0229 02:30:51.887744  369869 start.go:83] releasing machines lock for "default-k8s-diff-port-071485", held for 20.260217484s
	I0229 02:30:51.887771  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.888047  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.890930  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891264  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.891294  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891491  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892002  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892209  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892299  369869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:51.892370  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.892472  369869 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:51.892503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.895178  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895415  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895591  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895626  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895769  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895800  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895820  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.895966  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.896055  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896141  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896212  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896367  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.896447  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.976085  369869 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:52.001946  369869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:52.156753  369869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:52.164196  369869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:52.164302  369869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:52.189176  369869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:30:52.189201  369869 start.go:475] detecting cgroup driver to use...
	I0229 02:30:52.189281  369869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:52.207647  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:52.223752  369869 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:52.223842  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:52.246026  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:52.262180  369869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:52.409077  369869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:52.583777  369869 docker.go:233] disabling docker service ...
	I0229 02:30:52.583850  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:52.601434  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:52.617382  369869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:52.757258  369869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:52.898036  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:30:52.915787  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:52.939344  369869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:52.939417  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.951659  369869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:52.951722  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.963072  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.974800  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
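The sed invocations above rewrite whole key lines in /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and cgroup manager. A rough Go equivalent of that kind of whole-line key replacement — setConfKey is a hypothetical helper; a robust tool would go through a TOML parser instead:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"regexp"
    )

    // setConfKey replaces every line assigning `key` with `key = "value"`,
    // mirroring: sed -i 's|^.*key = .*$|key = "value"|' <path>
    func setConfKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
    	return os.WriteFile(path, out, 0644)
    }

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	if err := setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
    		log.Fatal(err)
    	}
    	if err := setConfKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
    		log.Fatal(err)
    	}
    }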
	I0229 02:30:52.986490  369869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:30:52.998630  369869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:53.009783  369869 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:53.009862  369869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:53.026356  369869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
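The sysctl probe above fails because the br_netfilter module is not loaded yet, so the fallback is modprobe followed by enabling IPv4 forwarding. The same fallback as a sketch (requires root; paths are the standard Linux ones):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBrNetfilter mirrors the sequence above: if the sysctl key is
    // absent, load br_netfilter, then enable IPv4 forwarding.
    func ensureBrNetfilter() error {
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(key); err != nil {
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    		}
    	}
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
    	if err := ensureBrNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }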
	I0229 02:30:53.038720  369869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:53.171220  369869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:30:53.326032  369869 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:53.326102  369869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:53.332369  369869 start.go:543] Will wait 60s for crictl version
	I0229 02:30:53.332431  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:30:53.336784  369869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:53.378780  369869 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
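crictl prints the key/value block shown above. A small sketch of invoking it and collecting those fields (assumes crictl sits at the logged path and sudo is non-interactive; the parsing is illustrative):

    package main

    import (
    	"bufio"
    	"bytes"
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fields := map[string]string{}
    	sc := bufio.NewScanner(bytes.NewReader(out))
    	for sc.Scan() {
    		// Lines look like "RuntimeVersion:  1.29.1".
    		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
    			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
    		}
    	}
    	fmt.Println(fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
    }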
	I0229 02:30:53.378902  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.411158  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.447038  369869 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:30:49.053324  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.062103665s)
	I0229 02:30:49.053353  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0229 02:30:49.053378  369591 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.053426  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.910791  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0229 02:30:49.910854  369591 cache_images.go:123] Successfully loaded all cached images
	I0229 02:30:49.910862  369591 cache_images.go:92] LoadImages completed in 16.696734078s
	I0229 02:30:49.910994  369591 ssh_runner.go:195] Run: crio config
	I0229 02:30:49.961413  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:30:49.961435  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:49.961456  369591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:49.961509  369591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-247751 NodeName:no-preload-247751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:49.961701  369591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-247751"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:30:49.961801  369591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-247751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:30:49.961866  369591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 02:30:49.973105  369591 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:49.973170  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:49.983178  369591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0229 02:30:50.001511  369591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 02:30:50.019574  369591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
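The kubeadm config printed earlier is written here to /var/tmp/minikube/kubeadm.yaml.new, generated from profile values. A toy text/template rendering of just the InitConfiguration header — field names and values mirror the log, but the template itself is illustrative, not minikube's:

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
    	data := struct {
    		AdvertiseAddress, CRISocket, NodeName, NodeIP string
    		APIServerPort                                 int
    	}{"192.168.72.114", "/var/run/crio/crio.sock", "no-preload-247751", "192.168.72.114", 8443}
    	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		log.Fatal(err)
    	}
    }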
	I0229 02:30:50.037993  369591 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:50.042075  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
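The bash pipeline above strips any stale control-plane.minikube.internal line from /etc/hosts and appends the current one. The same idea in Go — a direct write instead of the tmp-file-plus-sudo-cp dance, so it assumes the process already has write access to the file:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any line ending in "\t<name>" and appends "ip\tname",
    // matching the grep -v / echo pipeline in the log.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.72.114", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }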
	I0229 02:30:50.054761  369591 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751 for IP: 192.168.72.114
	I0229 02:30:50.054796  369591 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:50.054976  369591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:50.055031  369591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:50.055146  369591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/client.key
	I0229 02:30:50.055243  369591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key.9adeb8c5
	I0229 02:30:50.055310  369591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key
	I0229 02:30:50.055440  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:50.055481  369591 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:50.055502  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:50.055542  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:50.055577  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:50.055658  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:50.055728  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:50.056454  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:50.083764  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:50.110733  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:50.139180  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:50.167000  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:50.194044  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:50.220671  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:50.247561  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:50.274577  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:50.300997  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:50.327718  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:50.355463  369591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:50.374921  369591 ssh_runner.go:195] Run: openssl version
	I0229 02:30:50.381614  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:50.393546  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398532  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398594  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.404719  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:30:50.416507  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:50.428072  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433031  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433106  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.439174  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:50.450778  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:50.462238  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467219  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467269  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.473395  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:50.484817  369591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:50.489643  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:50.496274  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:50.502579  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:50.508665  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:50.514827  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:50.520958  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
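Each openssl x509 ... -checkend 86400 call above exits non-zero if the certificate expires within 24 hours, which is why a clean run proceeds straight to StartCluster. An equivalent check with Go's crypto/x509 (the path is one from the log; the checkend window is passed as a duration):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in pemBytes
    // expires within d, matching `openssl x509 -checkend`.
    func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	soon, err := expiresWithin(pemBytes, 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }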
	I0229 02:30:50.527032  369591 kubeadm.go:404] StartCluster: {Name:no-preload-247751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:50.527147  369591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:50.527194  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:50.565834  369591 cri.go:89] found id: ""
	I0229 02:30:50.565931  369591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:50.577305  369591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:50.577354  369591 kubeadm.go:636] restartCluster start
	I0229 02:30:50.577408  369591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:50.587881  369591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:50.588896  369591 kubeconfig.go:92] found "no-preload-247751" server: "https://192.168.72.114:8443"
	I0229 02:30:50.591223  369591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:50.601374  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:50.601434  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:50.613730  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.102422  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.102539  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.116483  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.601564  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.601657  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.615481  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.102039  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.102123  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.121300  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.601999  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.602093  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.618701  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.102291  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.102403  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.117898  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.602410  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.602496  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.618760  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.448437  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:53.451649  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.451998  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:53.452052  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.452302  369869 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:53.458709  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:53.477744  369869 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:30:53.477831  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:53.527511  369869 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:30:53.527593  369869 ssh_runner.go:195] Run: which lz4
	I0229 02:30:53.532370  369869 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 02:30:53.537149  369869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:30:53.537179  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:30:51.912520  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .Start
	I0229 02:30:51.912688  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring networks are active...
	I0229 02:30:51.913511  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network default is active
	I0229 02:30:51.913929  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network mk-old-k8s-version-275488 is active
	I0229 02:30:51.914378  370051 main.go:141] libmachine: (old-k8s-version-275488) Getting domain xml...
	I0229 02:30:51.915191  370051 main.go:141] libmachine: (old-k8s-version-275488) Creating domain...
	I0229 02:30:53.179261  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting to get IP...
	I0229 02:30:53.180359  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.180800  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.180922  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.180789  370858 retry.go:31] will retry after 282.360524ms: waiting for machine to come up
	I0229 02:30:53.465135  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.465708  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.465742  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.465651  370858 retry.go:31] will retry after 341.876004ms: waiting for machine to come up
	I0229 02:30:53.809322  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.809734  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.809876  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.809797  370858 retry.go:31] will retry after 356.208548ms: waiting for machine to come up
	I0229 02:30:54.167329  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.167824  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.167852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.167760  370858 retry.go:31] will retry after 395.76503ms: waiting for machine to come up
	I0229 02:30:54.565496  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.565976  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.566004  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.565933  370858 retry.go:31] will retry after 617.898012ms: waiting for machine to come up
	I0229 02:30:55.185679  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:55.186193  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:55.186237  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:55.186144  370858 retry.go:31] will retry after 911.947678ms: waiting for machine to come up
	I0229 02:30:56.099334  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:56.099788  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:56.099815  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:56.099726  370858 retry.go:31] will retry after 1.132066509s: waiting for machine to come up
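The retry.go lines above wait with growing, jittered delays for the old-k8s-version VM to obtain a DHCP lease. A generic version of that pattern — not minikube's retry package; the base delay and attempt cap are made up for the example:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls fn until it succeeds or attempts run out,
    // sleeping an exponentially growing, jittered delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base << uint(i)                      // 300ms, 600ms, 1.2s, ...
    		d += time.Duration(rand.Int63n(int64(d))) // up to +100% jitter
    		fmt.Printf("retry %d: will retry after %v: %v\n", i+1, d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	tries := 0
    	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
    		tries++
    		if tries < 3 {
    			return errors.New("unable to find current IP address")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }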
	I0229 02:30:54.102304  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.102485  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.123193  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:54.601763  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.601890  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.621846  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.102417  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.102503  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.129010  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.601478  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.601532  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.620169  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.101701  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.101776  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.121369  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.601447  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.601550  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.617079  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.101509  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.101648  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.121691  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.601658  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.601754  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.620357  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.101829  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.101921  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.115818  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.602403  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.602509  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.621857  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.599398  369869 crio.go:444] Took 2.067052 seconds to copy over tarball
	I0229 02:30:55.599501  369869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:30:58.543850  369869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944309258s)
	I0229 02:30:58.543884  369869 crio.go:451] Took 2.944447 seconds to extract the tarball
	I0229 02:30:58.543896  369869 ssh_runner.go:146] rm: /preloaded.tar.lz4
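Extracting the preload tarball above is a plain tar invocation with an lz4 decompressor, timed by ssh_runner. Run locally, the same step looks like this sketch — the flags and paths mirror the log, and it requires tar and lz4 installed plus root for /var:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v: %s", err, out)
    	}
    	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
    }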
	I0229 02:30:58.592492  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:58.751479  369869 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:30:58.751509  369869 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:30:58.751576  369869 ssh_runner.go:195] Run: crio config
	I0229 02:30:58.813487  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:30:58.813515  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:58.813540  369869 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:58.813566  369869 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.233 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071485 NodeName:default-k8s-diff-port-071485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:58.813785  369869 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071485"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:30:58.813898  369869 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-071485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0229 02:30:58.813971  369869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:30:58.826199  369869 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:58.826324  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:58.837384  369869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 02:30:58.856023  369869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:30:58.876432  369869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0229 02:30:58.900684  369869 ssh_runner.go:195] Run: grep 192.168.61.233	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:58.905249  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:58.920007  369869 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485 for IP: 192.168.61.233
	I0229 02:30:58.920046  369869 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:58.920249  369869 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:58.920319  369869 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:58.920432  369869 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/client.key
	I0229 02:30:58.995037  369869 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key.b3fc8ab0
	I0229 02:30:58.995173  369869 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key
	I0229 02:30:58.995377  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:58.995430  369869 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:58.995451  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:58.995503  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:58.995543  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:58.995590  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:58.995653  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:58.996607  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:59.026487  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:59.054725  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:59.082553  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:59.110374  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:59.141972  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:59.170097  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:59.201206  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:59.232790  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:59.263940  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:59.292401  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:59.321920  369869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:59.343921  369869 ssh_runner.go:195] Run: openssl version
	I0229 02:30:59.351308  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:59.364059  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369212  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369302  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.375683  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:59.389046  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:59.404101  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409433  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409491  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.416126  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:59.429674  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:59.443405  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448931  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448991  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.455800  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
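The three symlink commands above follow OpenSSL's subject-hash convention: "openssl x509 -hash -noout" prints the hash of the certificate's subject name (b5213941, 51391683, 3ec20f2e here), and a link named "<hash>.0" under /etc/ssl/certs is what lets OpenSSL-based clients locate the CA during chain verification. A minimal Go sketch of the same step, not minikube's code, with the PEM path assumed from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Assumed path, taken from the log lines above.
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same invocation the log shows: print the subject-hash of the cert.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link, then re-create it.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}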
	I0229 02:30:59.469013  369869 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:59.474745  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:59.481689  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:59.488868  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:59.496380  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:59.503593  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:59.510485  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
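The six "-checkend 86400" runs above ask openssl whether each control-plane certificate expires within the next 86400 seconds (24 hours); a zero exit status means the cert stays valid past that window. A native equivalent, sketched under the assumption that each file holds a single PEM-encoded certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside the
// window d, i.e. the same question `openssl x509 -noout -checkend N` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when now+d is past NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path assumed from the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}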
	I0229 02:30:59.517770  369869 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-071485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:59.517894  369869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:59.517941  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:59.564631  369869 cri.go:89] found id: ""
	I0229 02:30:59.564718  369869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:59.578812  369869 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:59.578881  369869 kubeadm.go:636] restartCluster start
	I0229 02:30:59.578954  369869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:59.592900  369869 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.593909  369869 kubeconfig.go:92] found "default-k8s-diff-port-071485" server: "https://192.168.61.233:8444"
	I0229 02:30:59.596083  369869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:59.609384  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.609466  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.625617  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.110139  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.110282  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.127301  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
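The repeated "Checking apiserver status" entries are a poll: pgrep exits with status 1 while no kube-apiserver process matches the pattern, and the check is retried on a short interval. A sketch of that loop, with the interval assumed from the roughly 500ms spacing visible in the timestamps:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for {
		// Same pattern the log runs; exit status 1 means no match yet.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
			return
		}
		fmt.Println("apiserver not up yet, retrying")
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
}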
	I0229 02:30:57.233610  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:57.234113  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:57.234145  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:57.234063  370858 retry.go:31] will retry after 1.238348525s: waiting for machine to come up
	I0229 02:30:58.474146  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:58.474696  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:58.474733  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:58.474642  370858 retry.go:31] will retry after 1.373712981s: waiting for machine to come up
	I0229 02:30:59.850075  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:59.850504  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:59.850526  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:59.850460  370858 retry.go:31] will retry after 2.156069813s: waiting for machine to come up
	I0229 02:30:59.101727  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.101812  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.120465  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.602060  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.602155  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.620588  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.102108  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.102203  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.120822  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.602443  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.602545  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.616796  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.616835  369591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:00.616858  369591 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:00.616873  369591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:00.616940  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:00.661747  369591 cri.go:89] found id: ""
	I0229 02:31:00.661869  369591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:00.684098  369591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:00.696989  369591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
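The stale-config cleanup is gated on all four kubeconfig files being present; because the "ls" above exited with status 2, cleanup is skipped and the kubeadm phases below regenerate everything. A sketch of that presence check:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The four files the log's `sudo ls -la` probes for.
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	allPresent := true
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			allPresent = false
			fmt.Println("missing:", f)
		}
	}
	// Cleanup of stale configs only makes sense when they all exist.
	fmt.Println("run stale-config cleanup:", allPresent)
}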
	I0229 02:31:00.697059  369591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708553  369591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708583  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:00.827929  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.578572  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.818119  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.892891  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
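The restart path replays a fixed sequence of kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml, as the five Run lines above show. A sketch of the same sequence, with the kubeadm binary on PATH and the config path assumed from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phase order mirrors the log above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}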
	I0229 02:31:01.964926  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:01.965037  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.466098  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.965290  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.465897  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.483060  369591 api_server.go:72] duration metric: took 1.518135432s to wait for apiserver process to appear ...
	I0229 02:31:03.483103  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:03.483127  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:00.610179  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.610299  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.630460  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.109543  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.109680  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.129578  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.610203  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.610301  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.630078  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.109835  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.109945  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.127400  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.610160  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.610269  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.630581  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.109702  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.109836  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.129754  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.610303  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.610389  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.629702  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.110325  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.110459  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.128740  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.610305  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.610403  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.624716  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:05.110349  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.110457  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.130070  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.007911  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:02.008381  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:02.008409  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:02.008330  370858 retry.go:31] will retry after 1.864134048s: waiting for machine to come up
	I0229 02:31:03.873997  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:03.874606  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:03.874653  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:03.874547  370858 retry.go:31] will retry after 2.45659808s: waiting for machine to come up
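The libmachine lines interleaved here show the VM-IP wait: each retry.go entry sleeps for a growing, jittered delay before re-querying the domain's DHCP lease. A sketch of that backoff pattern; the growth factor and jitter range are assumptions, and machineHasIP is a stand-in for the real libvirt lookup:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// machineHasIP stands in for checking the domain's DHCP lease for an address.
func machineHasIP() bool { return false }

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		if machineHasIP() {
			fmt.Println("machine is up")
			return
		}
		// Add jitter so parallel waiters do not poll in lockstep (assumed range).
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // assumed growth factor
	}
}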
	I0229 02:31:06.111554  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.111581  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.111596  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.191055  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.191090  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.483401  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.489220  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.489254  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:06.983921  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.988354  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.988430  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:07.483305  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:07.489830  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:31:07.497146  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:31:07.497187  369591 api_server.go:131] duration metric: took 4.014075718s to wait for apiserver health ...
	I0229 02:31:07.497201  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:31:07.497210  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:07.498785  369591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:07.500032  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:07.530625  369591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
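The 457-byte conflist written above is a bridge CNI configuration. The actual payload is not shown in the log; the following is only a plausible minimal shape, printed from a Go raw string so the field values are clearly illustrative rather than the real file contents:

package main

import "fmt"

// Illustrative only: every value below is an assumption, not the payload
// the log scp'd to /etc/cni/net.d/1-k8s.conflist.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() { fmt.Println(conflist) }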
	I0229 02:31:07.594249  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:07.604940  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:07.604973  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:07.604980  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:07.604989  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:07.604995  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:07.605003  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:07.605015  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:07.605022  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:07.605032  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:07.605052  369591 system_pods.go:74] duration metric: took 10.776743ms to wait for pod list to return data ...
	I0229 02:31:07.605061  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:07.608034  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:07.608059  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:07.608073  369591 node_conditions.go:105] duration metric: took 3.004467ms to run NodePressure ...
	I0229 02:31:07.608096  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:07.975871  369591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980949  369591 kubeadm.go:787] kubelet initialised
	I0229 02:31:07.980970  369591 kubeadm.go:788] duration metric: took 5.071971ms waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980979  369591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:07.986764  369591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.992673  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992698  369591 pod_ready.go:81] duration metric: took 5.911106ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.992707  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992717  369591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.997300  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997322  369591 pod_ready.go:81] duration metric: took 4.594827ms waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.997330  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997335  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.004032  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004052  369591 pod_ready.go:81] duration metric: took 6.71117ms waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.004060  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004066  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.009947  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.009985  369591 pod_ready.go:81] duration metric: took 5.909502ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.010001  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.010009  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.398938  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398965  369591 pod_ready.go:81] duration metric: took 388.944943ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.398975  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398982  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.797706  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797733  369591 pod_ready.go:81] duration metric: took 398.745142ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.797744  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797751  369591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:09.198467  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198496  369591 pod_ready.go:81] duration metric: took 400.737315ms waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:09.198506  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198511  369591 pod_ready.go:38] duration metric: took 1.217523271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
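Each pod_ready wait above short-circuits because the node itself is not yet "Ready", so the per-pod condition is skipped rather than failed. A client-go sketch of the underlying check, polling one pod until its Ready condition turns True; the kubeconfig path and pod name are taken from the log, and client-go is assumed available:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-2z5w8", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}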
	I0229 02:31:09.198530  369591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:31:09.211194  369591 ops.go:34] apiserver oom_adj: -16
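The oom_adj value of -16 read above confirms the apiserver is shielded from the OOM killer. A sketch of the same read the shell pipeline performs, using pgrep by name rather than the full command-line pattern:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// -n picks the newest matching process; assumes kube-apiserver is running.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(val)))
}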
	I0229 02:31:09.211222  369591 kubeadm.go:640] restartCluster took 18.633858862s
	I0229 02:31:09.211232  369591 kubeadm.go:406] StartCluster complete in 18.684207766s
	I0229 02:31:09.211263  369591 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.211346  369591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:09.212899  369591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.213213  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:31:09.213318  369591 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:31:09.213406  369591 addons.go:69] Setting storage-provisioner=true in profile "no-preload-247751"
	I0229 02:31:09.213426  369591 addons.go:69] Setting default-storageclass=true in profile "no-preload-247751"
	I0229 02:31:09.213446  369591 addons.go:69] Setting metrics-server=true in profile "no-preload-247751"
	I0229 02:31:09.213463  369591 addons.go:234] Setting addon metrics-server=true in "no-preload-247751"
	I0229 02:31:09.213465  369591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-247751"
	I0229 02:31:09.213463  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0229 02:31:09.213472  369591 addons.go:243] addon metrics-server should already be in state true
	I0229 02:31:09.213435  369591 addons.go:234] Setting addon storage-provisioner=true in "no-preload-247751"
	W0229 02:31:09.213515  369591 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:31:09.213519  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213541  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213915  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213924  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213943  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213978  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.218976  369591 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-247751" context rescaled to 1 replicas
	I0229 02:31:09.219015  369591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:31:09.220657  369591 out.go:177] * Verifying Kubernetes components...
	I0229 02:31:09.221954  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:31:09.230064  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0229 02:31:09.230528  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.231030  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.231053  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.231526  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.231762  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.233032  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0229 02:31:09.233487  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.233929  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0229 02:31:09.234003  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234028  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.234293  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.234406  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.234784  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234811  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.235009  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235068  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.235163  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.235631  369591 addons.go:234] Setting addon default-storageclass=true in "no-preload-247751"
	W0229 02:31:09.235651  369591 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:31:09.235679  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.235738  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235772  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.236123  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.236157  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.250756  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0229 02:31:09.251190  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.251830  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.251855  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.252228  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.252403  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.254210  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.256240  369591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:09.257522  369591 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.257537  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:31:09.257552  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.255418  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0229 02:31:09.255485  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0229 02:31:09.258003  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258129  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258432  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258457  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258664  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258676  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258822  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.258983  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.259278  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.259313  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.259533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.261295  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.261320  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.262706  369591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:31:05.610163  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.610319  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.627782  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.110424  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.110521  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.129628  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.610193  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.610330  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.624176  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.110249  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.110354  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.129955  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.609462  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.609536  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.623687  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.110263  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.110407  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.126900  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.610447  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.625182  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.109675  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.109759  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.124637  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.610399  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.630681  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.630715  369869 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:09.630757  369869 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:09.630777  369869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:09.630844  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:09.683876  369869 cri.go:89] found id: ""
	I0229 02:31:09.683963  369869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:09.706059  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:09.719868  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:09.719939  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734591  369869 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734622  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:09.862689  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:09.263808  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:31:09.263830  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:31:09.263849  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.261760  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.261947  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.263890  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.264339  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.264522  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.264704  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.266885  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267339  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.267358  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.267649  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.267782  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.267862  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.302813  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0229 02:31:09.303329  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.303878  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.303909  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.304305  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.304509  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.306147  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.306434  369591 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.306454  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:31:09.306472  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.309029  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309345  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.309382  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309670  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.309872  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.310048  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.310193  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
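The sshutil.go:53 lines above show the harness opening SSH connections to the VM from the driver's reported host, port, key path, and username. A minimal Go sketch of the same connection setup with golang.org/x/crypto/ssh follows; the host, port, key path, and user in main are placeholders, not values from this run.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials a VM the way the sshutil lines above describe:
// key-based auth as a given user, host key checking disabled.
func newSSHClient(host, port, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The logged external ssh invocations pass StrictHostKeyChecking=no;
		// InsecureIgnoreHostKey is the library-side equivalent.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%s", host, port), cfg)
}

func main() {
	client, err := newSSHClient("192.0.2.10", "22", "/tmp/id_rsa", "docker") // placeholders
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}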
	I0229 02:31:09.402579  369591 node_ready.go:35] waiting up to 6m0s for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:09.402756  369591 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:31:09.420259  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.426629  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:31:09.426655  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:31:09.446028  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.457219  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:31:09.457244  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:31:09.504028  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:09.504054  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:31:09.554137  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:10.485560  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.039492326s)
	I0229 02:31:10.485633  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485646  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.485928  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065634917s)
	I0229 02:31:10.485970  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485986  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486053  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.486072  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486092  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486104  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486112  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486254  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486287  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486304  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486320  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.487538  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487556  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487566  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.487543  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487582  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487579  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.494355  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.494374  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.494614  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.494635  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.494633  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.559201  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.005004802s)
	I0229 02:31:10.559258  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559276  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559592  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559614  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559625  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559633  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559899  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559915  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559926  369591 addons.go:470] Verifying addon metrics-server=true in "no-preload-247751"
	I0229 02:31:10.561833  369591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:31:06.333259  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:06.333776  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:06.333811  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:06.333733  370858 retry.go:31] will retry after 3.223893936s: waiting for machine to come up
	I0229 02:31:09.559349  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:09.559937  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:09.559968  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:09.559891  370858 retry.go:31] will retry after 5.278822831s: waiting for machine to come up
	I0229 02:31:10.560171  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.563240  369591 addons.go:505] enable addons completed in 1.349905679s: enabled=[storage-provisioner default-storageclass metrics-server]
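Each addon above is installed in two steps: the manifest is streamed from memory onto the VM ("scp memory --> /etc/kubernetes/addons/..."), then the pinned kubectl binary applies it under the node's kubeconfig. A sketch of the streaming half, under the assumption that a sudo tee pipe is an acceptable stand-in for the real scp framing:

package provision

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// copyToRemote writes in-memory bytes to a path on the VM over an existing
// SSH connection, the shape of the "scp memory --> ..." lines above.
// The real runner uses scp framing, permissions, and retries; this is a sketch.
func copyToRemote(c *ssh.Client, data []byte, dst string) error {
	sess, err := c.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + dst + " >/dev/null")
}

After the copy, the apply half is a single remote command, exactly as logged: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml.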
	I0229 02:31:11.408006  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:10.805438  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.016546  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.132323  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.212201  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:11.212309  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:11.713366  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.212866  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.713327  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.732027  369869 api_server.go:72] duration metric: took 1.519826457s to wait for apiserver process to appear ...
	I0229 02:31:12.732056  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:12.732078  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.109299  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.109349  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.109368  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.166169  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.166209  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.232359  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.267052  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.267099  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
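The 403 -> 500 -> 200 progression in these healthz probes is the normal apiserver startup sequence: anonymous requests are rejected until the RBAC bootstrap roles land, then individual poststarthook checks flip from failed to ok. A sketch of the polling loop itself; TLS verification is skipped only because the apiserver certificate is not in the host trust store, and the address reuses the one from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// waitHealthz polls a /healthz endpoint until it returns 200 or the
// timeout elapses, printing each non-OK body like the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.61.233:8444/healthz", time.Minute); err != nil {
		log.Fatal(err)
	}
}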
	I0229 02:31:16.096073  369508 start.go:369] acquired machines lock for "embed-certs-915633" in 58.856797615s
	I0229 02:31:16.096132  369508 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:31:16.096144  369508 fix.go:54] fixHost starting: 
	I0229 02:31:16.096651  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:16.096692  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:16.115912  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0229 02:31:16.116419  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:16.116967  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:31:16.116999  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:16.117362  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:16.117562  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:16.117742  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:31:16.119589  369508 fix.go:102] recreateIfNeeded on embed-certs-915633: state=Stopped err=<nil>
	I0229 02:31:16.119614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	W0229 02:31:16.119809  369508 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:31:16.121566  369508 out.go:177] * Restarting existing kvm2 VM for "embed-certs-915633" ...
	I0229 02:31:14.842498  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843049  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has current primary IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843083  370051 main.go:141] libmachine: (old-k8s-version-275488) Found IP for machine: 192.168.39.160
	I0229 02:31:14.843112  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserving static IP address...
	I0229 02:31:14.843485  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.843510  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | skip adding static IP to network mk-old-k8s-version-275488 - found existing host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"}
	I0229 02:31:14.843525  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserved static IP address: 192.168.39.160
	I0229 02:31:14.843535  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Getting to WaitForSSH function...
	I0229 02:31:14.843553  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting for SSH to be available...
	I0229 02:31:14.845739  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846087  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.846120  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846289  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH client type: external
	I0229 02:31:14.846319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa (-rw-------)
	I0229 02:31:14.846355  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:14.846372  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | About to run SSH command:
	I0229 02:31:14.846390  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | exit 0
	I0229 02:31:14.979384  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | SSH cmd err, output: <nil>: 
	I0229 02:31:14.979896  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetConfigRaw
	I0229 02:31:14.980716  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:14.983852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984278  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.984319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984639  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:31:14.984865  370051 machine.go:88] provisioning docker machine ...
	I0229 02:31:14.984890  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:14.985140  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985324  370051 buildroot.go:166] provisioning hostname "old-k8s-version-275488"
	I0229 02:31:14.985347  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985494  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:14.988036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988438  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.988464  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988633  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:14.988829  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989003  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989174  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:14.989361  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:14.989604  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:14.989621  370051 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275488 && echo "old-k8s-version-275488" | sudo tee /etc/hostname
	I0229 02:31:15.125564  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275488
	
	I0229 02:31:15.125605  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.128963  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129570  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.129652  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129735  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.129996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130185  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130380  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.130616  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.130872  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.130900  370051 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275488/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:15.272298  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:31:15.272337  370051 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:15.272368  370051 buildroot.go:174] setting up certificates
	I0229 02:31:15.272385  370051 provision.go:83] configureAuth start
	I0229 02:31:15.272402  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:15.272772  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:15.276382  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.276838  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.276869  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.277051  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.279927  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280298  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.280326  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280505  370051 provision.go:138] copyHostCerts
	I0229 02:31:15.280555  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:15.280566  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:15.280619  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:15.280749  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:15.280764  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:15.280789  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:15.280857  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:15.280871  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:15.280891  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:15.280954  370051 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275488 san=[192.168.39.160 192.168.39.160 localhost 127.0.0.1 minikube old-k8s-version-275488]
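The provision.go:112 line generates a TLS server certificate whose SANs cover the VM's IP, localhost, and both hostnames. A compact sketch with crypto/x509 follows; it self-signs for brevity, whereas the logged step signs with the ca.pem/ca-key.pem pair.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-275488"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.39.160"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-275488"},
	}
	// Self-signed (template doubles as parent); the real step signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}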
	I0229 02:31:15.360428  370051 provision.go:172] copyRemoteCerts
	I0229 02:31:15.360487  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:15.360512  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.363540  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.363931  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.363966  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.364154  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.364337  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.364495  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.364622  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.453643  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:15.483233  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 02:31:15.512164  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:15.543453  370051 provision.go:86] duration metric: configureAuth took 271.048547ms
	I0229 02:31:15.543484  370051 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:15.543705  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:31:15.543816  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.546472  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.546807  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.546835  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.547049  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.547272  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547455  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547662  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.547861  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.548035  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.548052  370051 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:15.835533  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:15.835572  370051 machine.go:91] provisioned docker machine in 850.691497ms
	I0229 02:31:15.835589  370051 start.go:300] post-start starting for "old-k8s-version-275488" (driver="kvm2")
	I0229 02:31:15.835604  370051 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:15.835635  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:15.835995  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:15.836025  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.838946  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839297  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.839330  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839460  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.839665  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.839839  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.840008  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.925849  370051 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:15.931227  370051 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:15.931260  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:15.931363  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:15.931465  370051 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:15.931574  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:15.942500  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:15.972803  370051 start.go:303] post-start completed in 137.19736ms
	I0229 02:31:15.972838  370051 fix.go:56] fixHost completed within 24.084893996s
	I0229 02:31:15.972873  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.975698  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976063  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.976093  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976279  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.976518  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976659  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976795  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.976959  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.977119  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.977130  370051 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:16.095892  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173876.041987567
	
	I0229 02:31:16.095917  370051 fix.go:206] guest clock: 1709173876.041987567
	I0229 02:31:16.095927  370051 fix.go:219] Guest: 2024-02-29 02:31:16.041987567 +0000 UTC Remote: 2024-02-29 02:31:15.972843681 +0000 UTC m=+279.886639354 (delta=69.143886ms)
	I0229 02:31:16.095954  370051 fix.go:190] guest clock delta is within tolerance: 69.143886ms
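The guest-clock check above runs date +%s.%N on the VM, parses the seconds.nanoseconds pair, and compares it against the local clock (here the delta is about 69ms, within tolerance). A parsing sketch; the 2s tolerance below is illustrative, not the value the harness uses.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the output of `date +%s.%N` run on the VM,
// e.g. "1709173876.041987567", into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	sec, nsec := strings.TrimSpace(s), "0"
	if i := strings.IndexByte(sec, '.'); i >= 0 {
		sec, nsec = sec[:i], sec[i+1:]
	}
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	ns, err := strconv.ParseInt(nsec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, ns), nil
}

func main() {
	guest, err := parseGuestClock("1709173876.041987567")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Tolerance value is an assumption for illustration only.
	fmt.Printf("guest clock delta: %s (ok if under 2s)\n", delta)
}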
	I0229 02:31:16.095962  370051 start.go:83] releasing machines lock for "old-k8s-version-275488", held for 24.208056775s
	I0229 02:31:16.095996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.096336  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:16.099518  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100016  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.100060  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100189  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100751  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100955  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.101035  370051 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:16.101084  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.101167  370051 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:16.101190  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.104588  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.104638  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105000  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105059  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105101  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105335  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105546  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105590  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105821  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105832  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106002  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:16.106028  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106180  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.732828  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.739797  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.739827  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.232355  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.240421  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:16.240462  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.732451  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.740118  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:31:16.748529  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:31:16.748567  369869 api_server.go:131] duration metric: took 4.0165029s to wait for apiserver health ...
	I0229 02:31:16.748580  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:31:16.748588  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:16.750561  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:16.194120  370051 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:16.220808  370051 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:16.386082  370051 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:16.393419  370051 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:16.393512  370051 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:16.418966  370051 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
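Bridge and podman CNI configs are pushed aside by renaming them with a .mk_disabled suffix, which is what the find/-exec mv command above does. The same effect sketched locally in Go (the real run performs it remotely with sudo):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Rename bridge/podman CNI configs out of the way, mirroring the logged
// find/mv command. Local-filesystem sketch only.
func main() {
	matches, _ := filepath.Glob("/etc/cni/net.d/*")
	for _, p := range matches {
		base := filepath.Base(p)
		if (strings.Contains(base, "bridge") || strings.Contains(base, "podman")) &&
			!strings.HasSuffix(base, ".mk_disabled") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				fmt.Println("skip:", err)
				continue
			}
			fmt.Println("disabled", p)
		}
	}
}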
	I0229 02:31:16.419003  370051 start.go:475] detecting cgroup driver to use...
	I0229 02:31:16.419087  370051 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:16.444372  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:16.466354  370051 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:16.466430  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:16.488710  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:16.509561  370051 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:16.651716  370051 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:16.840453  370051 docker.go:233] disabling docker service ...
	I0229 02:31:16.840538  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:16.869611  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:16.890123  370051 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:17.047701  370051 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:17.225457  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:17.248553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:17.275486  370051 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 02:31:17.275572  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.290350  370051 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:17.290437  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.304093  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.320562  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.339790  370051 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:17.356570  370051 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:17.371208  370051 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:17.371303  370051 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:17.390748  370051 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:31:17.405750  370051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:17.555023  370051 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:17.754419  370051 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:17.754508  370051 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:17.760190  370051 start.go:543] Will wait 60s for crictl version
	I0229 02:31:17.760245  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:17.765195  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:17.815839  370051 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:17.815953  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.857470  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.896796  370051 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 02:31:13.906892  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:15.907106  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:16.914513  369591 node_ready.go:49] node "no-preload-247751" has status "Ready":"True"
	I0229 02:31:16.914545  369591 node_ready.go:38] duration metric: took 7.511932085s waiting for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:16.914560  369591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:16.925133  369591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940518  369591 pod_ready.go:92] pod "coredns-76f75df574-2z5w8" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:16.940553  369591 pod_ready.go:81] duration metric: took 15.382701ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940568  369591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
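The pod_ready lines above all follow one pattern: poll a pod's Ready condition under a per-pod time budget and report the elapsed time as a duration metric. A simplified, dependency-free Go model of that loop (the real code queries the API server via client-go; waitReady and the stub check are hypothetical):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitReady polls check until it reports true or the budget is spent,
// returning how long the wait took, like the duration metrics above.
func waitReady(name string, budget time.Duration, check func() (bool, error)) (time.Duration, error) {
	start := time.Now()
	for {
		ok, err := check()
		if err != nil {
			return time.Since(start), err
		}
		if ok {
			return time.Since(start), nil
		}
		if time.Since(start) > budget {
			return time.Since(start), errors.New(name + ": timed out waiting for Ready")
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	took, err := waitReady("etcd-no-preload-247751", 6*time.Minute, func() (bool, error) {
		return true, nil // stub: replace with a real Ready-condition lookup
	})
	fmt.Println(took, err)
}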
	I0229 02:31:16.122967  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Start
	I0229 02:31:16.123141  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring networks are active...
	I0229 02:31:16.124019  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network default is active
	I0229 02:31:16.124630  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network mk-embed-certs-915633 is active
	I0229 02:31:16.125118  369508 main.go:141] libmachine: (embed-certs-915633) Getting domain xml...
	I0229 02:31:16.126026  369508 main.go:141] libmachine: (embed-certs-915633) Creating domain...
	I0229 02:31:17.664537  369508 main.go:141] libmachine: (embed-certs-915633) Waiting to get IP...
	I0229 02:31:17.665883  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.666462  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.666595  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.666455  371066 retry.go:31] will retry after 193.172159ms: waiting for machine to come up
	I0229 02:31:17.861043  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.861754  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.861781  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.861651  371066 retry.go:31] will retry after 298.133474ms: waiting for machine to come up
	I0229 02:31:18.161304  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.161851  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.161886  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.161818  371066 retry.go:31] will retry after 402.680342ms: waiting for machine to come up
	I0229 02:31:18.566482  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.567145  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.567165  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.567068  371066 retry.go:31] will retry after 536.886613ms: waiting for machine to come up
	I0229 02:31:19.106090  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.106797  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.106823  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.106714  371066 retry.go:31] will retry after 583.032631ms: waiting for machine to come up
	I0229 02:31:19.691531  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.692096  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.692127  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.692000  371066 retry.go:31] will retry after 780.156818ms: waiting for machine to come up
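The retry.go lines above show a jittered, roughly growing backoff while the driver waits for the VM to obtain an IP. An illustrative Go sketch of that shape (retryAfter is hypothetical; the exact schedule in the log is produced elsewhere):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryAfter mimics the "will retry after Xms" lines: each failed attempt
// sleeps for a jittered, growing interval until op succeeds or the overall
// deadline passes.
func retryAfter(deadline time.Duration, op func() error) error {
	stop := time.Now().Add(deadline)
	base := 200 * time.Millisecond
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("giving up: %w", err)
		}
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		base += base / 2 // grow ~1.5x, loosely matching the observed intervals
	}
}

func main() {
	attempts := 0
	_ = retryAfter(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("waiting for machine to come up")
		}
		return nil
	})
}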
	I0229 02:31:16.752375  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:16.783785  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:31:16.816646  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:16.829430  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:16.829480  369869 system_pods.go:61] "coredns-5dd5756b68-652db" [d989183e-dc0d-4913-8eab-fdfac0cf7ad7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:16.829491  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [aba29f47-cf0e-4ee5-8d18-7647b36369e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:16.829501  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [26a426b2-d5b7-456e-a733-3317009974ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:16.829517  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [a896f9fa-991f-44bb-bd97-02fac3494eea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:16.829528  369869 system_pods.go:61] "kube-proxy-g976s" [bc750be0-ae2b-4033-b65b-f1cccaebf32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:16.829536  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [d99d25bf-25f4-4057-aedb-fc5ba797af47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:16.829544  369869 system_pods.go:61] "metrics-server-57f55c9bc5-86frx" [0ad81c0d-3f9a-45d8-93d8-bbb9e276d5b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:16.829560  369869 system_pods.go:61] "storage-provisioner" [92683c3e-04c1-4cef-988d-3b8beb7d4399] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:16.829570  369869 system_pods.go:74] duration metric: took 12.896339ms to wait for pod list to return data ...
	I0229 02:31:16.829584  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:16.837494  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:16.837524  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:16.837535  369869 node_conditions.go:105] duration metric: took 7.942051ms to run NodePressure ...
	I0229 02:31:16.837560  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:17.293873  369869 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300874  369869 kubeadm.go:787] kubelet initialised
	I0229 02:31:17.300907  369869 kubeadm.go:788] duration metric: took 7.00259ms waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300919  369869 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:17.315838  369869 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.328228  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328265  369869 pod_ready.go:81] duration metric: took 12.396088ms waiting for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.328278  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328287  369869 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.335458  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335487  369869 pod_ready.go:81] duration metric: took 7.145351ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.335497  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335505  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.356278  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356365  369869 pod_ready.go:81] duration metric: took 20.849982ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.356385  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356396  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:19.376170  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:17.898162  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:17.901332  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.901809  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:17.901840  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.902046  370051 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:17.907256  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:17.924135  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:31:17.924218  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:17.986923  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:17.986992  370051 ssh_runner.go:195] Run: which lz4
	I0229 02:31:17.992110  370051 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:17.997252  370051 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:17.997287  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 02:31:20.124958  370051 crio.go:444] Took 2.132885 seconds to copy over tarball
	I0229 02:31:20.125075  370051 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
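Since no preloaded tarball exists on the VM, the runner copies the ~441 MB image cache over and unpacks it into /var with lz4, using exactly the tar invocation above. A small Go wrapper around the same commands (assumes tar and lz4 are on PATH; illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Existence check first, as in the log; in minikube the file is copied
	// over ssh when missing, which is elided here.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing, would copy it here:", err)
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract:", err)
		os.Exit(1)
	}
}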
	I0229 02:31:18.948383  369591 pod_ready.go:102] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:20.950330  369591 pod_ready.go:92] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:20.950359  369591 pod_ready.go:81] duration metric: took 4.009782336s waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.950372  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460878  369591 pod_ready.go:92] pod "kube-apiserver-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.460907  369591 pod_ready.go:81] duration metric: took 1.510525429s waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460922  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468463  369591 pod_ready.go:92] pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.468487  369591 pod_ready.go:81] duration metric: took 7.556807ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468497  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476459  369591 pod_ready.go:92] pod "kube-proxy-cdc4l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.476488  369591 pod_ready.go:81] duration metric: took 7.983254ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476501  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482564  369591 pod_ready.go:92] pod "kube-scheduler-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.482589  369591 pod_ready.go:81] duration metric: took 6.080532ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482598  369591 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.474186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:20.474741  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:20.474784  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:20.474647  371066 retry.go:31] will retry after 845.550951ms: waiting for machine to come up
	I0229 02:31:21.322246  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:21.323007  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:21.323031  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:21.322935  371066 retry.go:31] will retry after 1.085864892s: waiting for machine to come up
	I0229 02:31:22.410244  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:22.410735  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:22.410766  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:22.410687  371066 retry.go:31] will retry after 1.587558593s: waiting for machine to come up
	I0229 02:31:24.000303  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:24.000914  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:24.000944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:24.000828  371066 retry.go:31] will retry after 2.058374822s: waiting for machine to come up
	I0229 02:31:21.867552  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.972250  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.981829  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.981860  369869 pod_ready.go:81] duration metric: took 6.625453699s waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.981875  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994568  369869 pod_ready.go:92] pod "kube-proxy-g976s" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.994597  369869 pod_ready.go:81] duration metric: took 12.712769ms waiting for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994609  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002085  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:24.002110  369869 pod_ready.go:81] duration metric: took 7.492788ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002133  369869 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.625489  370051 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.500380961s)
	I0229 02:31:23.625526  370051 crio.go:451] Took 3.500531 seconds to extract the tarball
	I0229 02:31:23.625536  370051 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:31:23.671458  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:23.714048  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:23.714087  370051 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:31:23.714189  370051 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.714213  370051 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.714309  370051 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:31:23.714424  370051 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.714269  370051 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.714461  370051 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.714519  370051 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.714192  370051 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.716086  370051 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.716076  370051 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716088  370051 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.716143  370051 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.716081  370051 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.716275  370051 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:31:23.838722  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.844569  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 02:31:23.853089  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.857738  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.864060  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.865519  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.926256  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.997349  370051 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:31:23.997401  370051 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.997463  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.010625  370051 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:31:24.010674  370051 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:31:24.010722  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083140  370051 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:31:24.083203  370051 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:31:24.083232  370051 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:31:24.083247  370051 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.083266  370051 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.083269  370051 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.083308  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083319  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083364  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083166  370051 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:31:24.083426  370051 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.083471  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123878  370051 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:31:24.123928  370051 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.123972  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123982  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:24.123973  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:31:24.124043  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.124051  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.124097  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.124153  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.152226  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.270585  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:31:24.305436  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:31:24.305532  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:31:24.305621  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:31:24.305629  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:31:24.305799  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:31:24.316950  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:31:24.635837  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:24.791670  370051 cache_images.go:92] LoadImages completed in 1.077558745s
	W0229 02:31:24.791798  370051 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I0229 02:31:24.791902  370051 ssh_runner.go:195] Run: crio config
	I0229 02:31:24.851132  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:31:24.851164  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:24.851189  370051 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:24.851213  370051 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275488 NodeName:old-k8s-version-275488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:31:24.851423  370051 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-275488
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.160:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:31:24.851524  370051 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275488 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:31:24.851598  370051 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:31:24.864237  370051 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:24.864330  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:24.879552  370051 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 02:31:24.901027  370051 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:24.920638  370051 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 02:31:24.941894  370051 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:24.947439  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:24.962396  370051 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488 for IP: 192.168.39.160
	I0229 02:31:24.962435  370051 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:24.962621  370051 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:24.962673  370051 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:24.962781  370051 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.key
	I0229 02:31:24.962851  370051 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619
	I0229 02:31:24.962919  370051 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key
	I0229 02:31:24.963087  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:24.963126  370051 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:24.963138  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:24.963185  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:24.963213  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:24.963245  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:24.963296  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:24.963980  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:24.996049  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:25.030503  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:25.057695  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:25.091982  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:25.126636  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:25.156613  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:25.186480  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:25.221012  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:25.254122  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:25.282646  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:25.312624  370051 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:25.335020  370051 ssh_runner.go:195] Run: openssl version
	I0229 02:31:25.342920  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:25.355808  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361349  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361433  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.368335  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:25.380799  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:25.393069  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398466  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398539  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.404776  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:25.416735  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:25.428884  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434503  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434584  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.441187  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:31:25.453174  370051 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:25.458712  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:25.466032  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:25.473895  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:25.482948  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:25.491808  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:25.499003  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
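Each openssl x509 -checkend 86400 run above asks one question: does this certificate expire within the next 24 hours? The same check written against Go's standard library (expiresWithin is a hypothetical helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, equivalent to `openssl x509 -checkend` with d in seconds.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err)
}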
	I0229 02:31:25.506691  370051 kubeadm.go:404] StartCluster: {Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:25.506829  370051 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:25.506883  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:25.551867  370051 cri.go:89] found id: ""
	I0229 02:31:25.551970  370051 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:25.564446  370051 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:25.564476  370051 kubeadm.go:636] restartCluster start
	I0229 02:31:25.564545  370051 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:25.576275  370051 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:25.577406  370051 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:25.578043  370051 kubeconfig.go:146] "old-k8s-version-275488" context is missing from /home/jenkins/minikube-integration/18063-316644/kubeconfig - will repair!
	I0229 02:31:25.578979  370051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:25.580805  370051 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:25.592154  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:25.592259  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:25.609268  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:26.092701  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.092827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.108636  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:24.491508  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.492827  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.496040  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.062093  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:26.062582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:26.062612  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:26.062525  371066 retry.go:31] will retry after 2.231071357s: waiting for machine to come up
	I0229 02:31:28.295693  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:28.296180  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:28.296214  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:28.296116  371066 retry.go:31] will retry after 2.376277578s: waiting for machine to come up
	I0229 02:31:26.010834  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.031628  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.592320  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.592412  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.606907  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.092891  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.093028  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.112353  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.592956  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.593058  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.612315  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.092611  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.092729  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.108095  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.592592  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.592679  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.612145  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.092605  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.092720  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.113807  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.593002  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.593085  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.609337  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.092667  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.092757  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.112800  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.592328  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.592415  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.610909  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:31.092418  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.092529  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.109490  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.990551  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.990785  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:30.675432  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:30.675962  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:30.675995  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:30.675901  371066 retry.go:31] will retry after 4.442717853s: waiting for machine to come up
	I0229 02:31:30.511576  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.515611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:35.010325  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:31.593046  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.593128  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.608148  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.092187  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.092299  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.107573  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.593184  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.593312  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.607993  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.092500  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.092603  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.107359  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.592987  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.593101  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.608041  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.092919  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.093023  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.107597  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.593200  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.593295  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.608100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.092589  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.092683  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.107100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.592815  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.592928  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.610879  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.610916  370051 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:35.610930  370051 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:35.610947  370051 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:35.611032  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:35.660059  370051 cri.go:89] found id: ""
	I0229 02:31:35.660146  370051 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:35.682067  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:35.694455  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:35.694542  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707118  370051 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707149  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:35.834811  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
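	(The retry spam above is minikube's apiserver liveness probe: it shells into the guest roughly every 500ms, looks for a kube-apiserver process with pgrep, and when the deadline expires it falls back to the phase-by-phase reconfigure that starts here. A minimal sketch of the same probe, assuming a hypothetical 30s budget:

	    #!/usr/bin/env bash
	    # Poll for a running kube-apiserver the way the log above does,
	    # assuming a hypothetical 30 s deadline and the ~500 ms cadence seen here.
	    deadline=$((SECONDS + 30))
	    until pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); do
	      if (( SECONDS >= deadline )); then
	        echo "apiserver error: context deadline exceeded" >&2
	        exit 1   # caller treats this as "needs reconfigure"
	      fi
	      sleep 0.5
	    done
	    echo "kube-apiserver pid: ${pid}"
	)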
	I0229 02:31:35.123364  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123906  369508 main.go:141] libmachine: (embed-certs-915633) Found IP for machine: 192.168.50.218
	I0229 02:31:35.123925  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has current primary IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123931  369508 main.go:141] libmachine: (embed-certs-915633) Reserving static IP address...
	I0229 02:31:35.124398  369508 main.go:141] libmachine: (embed-certs-915633) Reserved static IP address: 192.168.50.218
	I0229 02:31:35.124423  369508 main.go:141] libmachine: (embed-certs-915633) Waiting for SSH to be available...
	I0229 02:31:35.124441  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.124468  369508 main.go:141] libmachine: (embed-certs-915633) DBG | skip adding static IP to network mk-embed-certs-915633 - found existing host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"}
	I0229 02:31:35.124487  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Getting to WaitForSSH function...
	I0229 02:31:35.126676  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127004  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.127035  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127137  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH client type: external
	I0229 02:31:35.127168  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa (-rw-------)
	I0229 02:31:35.127199  369508 main.go:141] libmachine: (embed-certs-915633) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:35.127213  369508 main.go:141] libmachine: (embed-certs-915633) DBG | About to run SSH command:
	I0229 02:31:35.127224  369508 main.go:141] libmachine: (embed-certs-915633) DBG | exit 0
	I0229 02:31:35.251075  369508 main.go:141] libmachine: (embed-certs-915633) DBG | SSH cmd err, output: <nil>: 
	I0229 02:31:35.251474  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetConfigRaw
	I0229 02:31:35.252256  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.254934  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255350  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.255378  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255676  369508 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/config.json ...
	I0229 02:31:35.255881  369508 machine.go:88] provisioning docker machine ...
	I0229 02:31:35.255905  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:35.256154  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256344  369508 buildroot.go:166] provisioning hostname "embed-certs-915633"
	I0229 02:31:35.256369  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256506  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.258794  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259163  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.259186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259337  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.259551  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259716  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259875  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.260066  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.260256  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.260269  369508 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-915633 && echo "embed-certs-915633" | sudo tee /etc/hostname
	I0229 02:31:35.383734  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-915633
	
	I0229 02:31:35.383770  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.386559  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.386913  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.386944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.387121  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.387359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387631  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387815  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.387979  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.388158  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.388175  369508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-915633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-915633/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-915633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:35.521449  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:31:35.521490  369508 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:35.521530  369508 buildroot.go:174] setting up certificates
	I0229 02:31:35.521544  369508 provision.go:83] configureAuth start
	I0229 02:31:35.521573  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.521923  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.524829  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525193  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.525217  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525348  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.527582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.527980  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.528012  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.528164  369508 provision.go:138] copyHostCerts
	I0229 02:31:35.528216  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:35.528234  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:35.528290  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:35.528384  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:35.528396  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:35.528415  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:35.528514  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:35.528525  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:35.528544  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:35.528591  369508 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.embed-certs-915633 san=[192.168.50.218 192.168.50.218 localhost 127.0.0.1 minikube embed-certs-915633]
	I0229 02:31:35.778616  369508 provision.go:172] copyRemoteCerts
	I0229 02:31:35.778679  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:35.778706  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.782134  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782605  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.782640  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782833  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.783103  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.783305  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.783522  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:35.870506  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:35.904595  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:31:35.936515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:35.966505  369508 provision.go:86] duration metric: configureAuth took 444.939951ms
	I0229 02:31:35.966539  369508 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:35.966725  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:31:35.966831  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.969731  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970133  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.970176  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970402  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.970623  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970788  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970968  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.971139  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.971382  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.971401  369508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:36.262676  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:36.262719  369508 machine.go:91] provisioned docker machine in 1.00682197s
	I0229 02:31:36.262731  369508 start.go:300] post-start starting for "embed-certs-915633" (driver="kvm2")
	I0229 02:31:36.262743  369508 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:36.262765  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.263140  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:36.263179  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.265718  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266095  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.266126  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.266486  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.266658  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.266795  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.359474  369508 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:36.365071  369508 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:36.365110  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:36.365202  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:36.365279  369508 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:36.365395  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:36.376823  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:36.406525  369508 start.go:303] post-start completed in 143.75518ms
	I0229 02:31:36.406588  369508 fix.go:56] fixHost completed within 20.310442727s
	I0229 02:31:36.406619  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.409415  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.409840  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.409875  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.410009  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.410214  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410412  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410567  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.410715  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:36.410936  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:36.410950  369508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:36.520508  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173896.494400897
	
	I0229 02:31:36.520543  369508 fix.go:206] guest clock: 1709173896.494400897
	I0229 02:31:36.520555  369508 fix.go:219] Guest: 2024-02-29 02:31:36.494400897 +0000 UTC Remote: 2024-02-29 02:31:36.406594326 +0000 UTC m=+361.755087901 (delta=87.806571ms)
	I0229 02:31:36.520584  369508 fix.go:190] guest clock delta is within tolerance: 87.806571ms
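	(The clock check is plain arithmetic: fix.go reads the guest's `date +%s.%N` over SSH and subtracts its own record of host time — here 1709173896.494400897 − 1709173896.406594326 ≈ 87.8ms, inside tolerance. A rough standalone version, assuming a working `ssh guest` alias and a hypothetical 1s tolerance:

	    guest=$(ssh guest 'date +%s.%N')   # guest wall clock
	    host=$(date +%s.%N)                # local wall clock
	    awk -v g="$guest" -v h="$host" 'BEGIN {
	      d = g - h
	      printf "guest clock delta: %+.6fs\n", d
	      exit (d > 1 || d < -1)   # nonzero exit when skew exceeds tolerance
	    }'
	)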
	I0229 02:31:36.520597  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 20.424490067s
	I0229 02:31:36.520629  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.520949  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:36.523819  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524146  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.524185  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.524912  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525109  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525206  369508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:36.525251  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.525332  369508 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:36.525360  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.528265  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528470  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528614  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528641  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528826  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528829  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.528855  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.529047  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.529135  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529253  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529321  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529414  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529478  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.529556  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.611757  369508 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:36.638875  369508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:36.786219  369508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:36.798964  369508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:36.799056  369508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:36.817942  369508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:36.817975  369508 start.go:475] detecting cgroup driver to use...
	I0229 02:31:36.818086  369508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:36.837019  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:36.855078  369508 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:36.855159  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:36.873444  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:36.891708  369508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:37.031928  369508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:37.212859  369508 docker.go:233] disabling docker service ...
	I0229 02:31:37.212960  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:37.235232  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:37.253901  369508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:37.401366  369508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:37.530791  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:37.547864  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:37.570344  369508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:31:37.570416  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.582275  369508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:37.582345  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.593628  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.605168  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
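	(Those sed edits leave the cri-o drop-in in roughly this state — a sketch of /etc/crio/crio.conf.d/02-crio.conf showing only the keys touched above, under the standard cri-o section headers:

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"   # matches the kubelet's cgroupDriver below
	    conmon_cgroup = "pod"         # required when cgroup_manager is cgroupfs

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"
	)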
	I0229 02:31:37.616567  369508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:37.628153  369508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:37.638579  369508 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:37.638640  369508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:37.652738  369508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:31:37.664118  369508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:37.785330  369508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:37.933006  369508 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:37.933095  369508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:37.938625  369508 start.go:543] Will wait 60s for crictl version
	I0229 02:31:37.938702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:31:37.943285  369508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:37.984992  369508 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:37.985105  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.018467  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.051472  369508 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:31:34.991345  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.991987  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:38.052850  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:38.055688  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.055970  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:38.056006  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.056253  369508 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:38.060925  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:38.076126  369508 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:31:38.076197  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:38.116261  369508 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:31:38.116372  369508 ssh_runner.go:195] Run: which lz4
	I0229 02:31:38.121080  369508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:38.125711  369508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:38.125755  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:31:37.012008  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:39.018348  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.790885  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.042778  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.130251  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.215289  370051 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:37.215384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:37.715589  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.715938  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.215781  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.716505  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.216238  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.716182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
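	(Condensed, the 370051 reconfigure above is a phase-by-phase kubeadm init rather than a monolithic one: adopt the regenerated config, replay the certs, kubeconfig, kubelet-start, control-plane and etcd phases, then resume polling for the apiserver. A sketch using the same paths as the log:

	    #!/usr/bin/env bash
	    set -euo pipefail
	    export PATH="/var/lib/minikube/binaries/v1.16.0:$PATH"
	    cfg=/var/tmp/minikube/kubeadm.yaml

	    sudo cp "${cfg}.new" "$cfg"   # adopt the regenerated config
	    for phase in "certs all" "kubeconfig all" kubelet-start \
	                 "control-plane all" "etcd local"; do
	      sudo env PATH="$PATH" kubeadm init phase $phase --config "$cfg"   # $phase splits on purpose
	    done
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # back to the liveness probe
	)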
	I0229 02:31:38.992988  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.491712  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.492458  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:40.139859  369508 crio.go:444] Took 2.018817 seconds to copy over tarball
	I0229 02:31:40.139953  369508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:31:43.071745  369508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.931752333s)
	I0229 02:31:43.071797  369508 crio.go:451] Took 2.931905 seconds to extract the tarball
	I0229 02:31:43.071809  369508 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:31:43.118127  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:43.171147  369508 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:31:43.171176  369508 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:31:43.171262  369508 ssh_runner.go:195] Run: crio config
	I0229 02:31:43.232177  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:31:43.232203  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:43.232229  369508 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:43.232247  369508 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-915633 NodeName:embed-certs-915633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:31:43.232419  369508 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-915633"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:31:43.232519  369508 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-915633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:31:43.232596  369508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:31:43.244392  369508 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:43.244467  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:43.256293  369508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0229 02:31:43.275397  369508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:43.295494  369508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0229 02:31:43.316812  369508 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:43.321496  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:43.335055  369508 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633 for IP: 192.168.50.218
	I0229 02:31:43.335092  369508 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:43.335270  369508 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:43.335316  369508 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:43.335388  369508 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/client.key
	I0229 02:31:43.335442  369508 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key.cc0da009
	I0229 02:31:43.335475  369508 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key
	I0229 02:31:43.335584  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:43.335610  369508 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:43.335619  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:43.335642  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:43.335673  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:43.335710  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:43.335779  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:43.336455  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:43.364985  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:43.394189  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:43.424515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:43.456589  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:43.486396  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:43.516931  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:43.546421  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:43.578923  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:43.608333  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:43.637196  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:43.667522  369508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:43.688266  369508 ssh_runner.go:195] Run: openssl version
	I0229 02:31:43.695616  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:43.709892  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715346  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715426  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.722688  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:43.735866  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:43.749967  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757599  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757671  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.765157  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:43.779671  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:43.792900  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798505  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798576  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.805192  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
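	(The -hash/ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: each PEM is symlinked as <subject-hash>.0 so verification can find it by hash — e.g. b5213941.0 for minikubeCA.pem. One iteration as a sketch:

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")    # prints e.g. b5213941
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # OpenSSL resolves CAs by <hash>.0
	)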
	I0229 02:31:43.818233  369508 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:43.823681  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:43.831016  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:43.837899  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:43.844802  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:43.851881  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:43.858689  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
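	(Each -checkend 86400 probe exits non-zero if the certificate expires within 24 hours; a compact loop over the same control-plane certs — a sketch, paths as logged:

	    for crt in apiserver-etcd-client apiserver-kubelet-client \
	               etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	      sudo openssl x509 -noout -checkend 86400 \
	        -in "/var/lib/minikube/certs/${crt}.crt" \
	        || { echo "${crt}.crt expires within 24h" >&2; exit 1; }
	    done
	)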
	I0229 02:31:43.865749  369508 kubeadm.go:404] StartCluster: {Name:embed-certs-915633 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:43.865852  369508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:43.865925  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:43.906012  369508 cri.go:89] found id: ""
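
[editor's note] `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` prints one container ID per line for containers whose pod lives in kube-system; the `found id: ""` line means the output was empty, i.e. no control-plane containers exist yet. A sketch of reading that list (helper name invented):

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // kubeSystemContainers returns the container IDs crictl prints, one
    // per line; an empty slice corresponds to the log's `found id: ""`.
    func kubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }
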
	I0229 02:31:43.906116  369508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:43.918241  369508 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:43.918265  369508 kubeadm.go:636] restartCluster start
	I0229 02:31:43.918349  369508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:43.930524  369508 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:43.931550  369508 kubeconfig.go:92] found "embed-certs-915633" server: "https://192.168.50.218:8443"
	I0229 02:31:43.933612  369508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:43.944469  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:43.944519  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:43.958194  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:44.444746  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:44.444840  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:44.458567  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
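
[editor's note] In `pgrep -xnf kube-apiserver.*minikube.*`, -x requires an exact match, -n picks the newest matching process, and -f matches against the full command line; exit status 1 simply means "no such process yet", so the runs above repeat on a roughly 500 ms cadence. A sketch of that poll loop (interval and deadline are illustrative, not minikube's exact values):

    package sketch

    import (
        "errors"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerPID polls pgrep until kube-apiserver's PID appears
    // or the deadline passes, mirroring the retry cadence in the log.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing above
        }
        return "", errors.New("kube-apiserver process never appeared")
    }
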
	I0229 02:31:41.510364  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.511424  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.216236  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:41.716082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.215537  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.715524  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.215873  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.715634  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.216464  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.715519  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.216430  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.716196  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.990995  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.489390  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:44.944934  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.003707  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.018797  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.445348  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.445435  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.460199  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.944750  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.944879  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.959309  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.445218  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.445313  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.459195  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.945456  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.945538  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.959212  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.444711  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.444819  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.459189  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.944651  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.944726  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.958733  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.445008  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.445100  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.460126  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.944649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.944731  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.959993  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:49.444545  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.444628  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.458889  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.011657  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.508465  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:46.215715  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:46.715657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.216495  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.715491  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.215459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.715556  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.215675  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.716046  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.215993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.715594  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.489578  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.990638  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:49.945108  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.945265  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.960625  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.444843  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.444923  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.459329  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.944871  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.944963  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.959583  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.444601  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.444704  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.462037  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.944573  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.944658  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.958538  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.445111  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.445269  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.462902  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.945088  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.945182  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.960241  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.444649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.444738  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.458642  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.945214  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.945291  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.960552  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.960588  369508 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:53.960600  369508 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:53.960615  369508 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:53.960671  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:54.005230  369508 cri.go:89] found id: ""
	I0229 02:31:54.005321  369508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:54.027544  369508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:54.040517  369508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:54.040577  369508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051200  369508 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051223  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:54.168817  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:50.509119  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.509526  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:54.511540  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:51.215927  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:51.715888  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.215659  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.715769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.216175  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.715755  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.216468  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.216280  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.715924  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.992721  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:57.490570  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:55.091652  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.346578  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.443373  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
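
[editor's note] Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the already-generated /var/tmp/minikube/kubeadm.yaml, as the five runs above show. A sketch of that sequence as a loop; the real invocations also prefix `sudo env PATH=/var/lib/minikube/binaries/...`, which is omitted here:

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // replayInitPhases runs the same `kubeadm init phase` sequence the
    // log shows, against the existing config, instead of a full init.
    func replayInitPhases() error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }
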
	I0229 02:31:55.542444  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:55.542562  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.042870  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.542972  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.571776  369508 api_server.go:72] duration metric: took 1.029332492s to wait for apiserver process to appear ...
	I0229 02:31:56.571808  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:56.571831  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:56.572606  369508 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0229 02:31:57.072145  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.557011  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.557048  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.557066  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.609944  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.610010  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.610028  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.669911  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:59.669955  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
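
[editor's note] The 403s above are expected early in startup: /healthz is hit anonymously, and until the rbac/bootstrap-roles post-start hook finishes, system:anonymous has no permission to read it. The subsequent 500 bodies enumerate which post-start hooks are still pending. A sketch of a poll that treats anything but 200 "ok" as "keep waiting" (endpoint copied from the log; TLS verification is skipped only because this sketch configures no CA):

    package sketch

    import (
        "crypto/tls"
        "net/http"
        "time"
    )

    // apiserverHealthy polls /healthz until it returns 200. Both 403
    // (anonymous, RBAC not yet bootstrapped) and 500 (post-start hooks
    // pending) count as "not ready yet".
    func apiserverHealthy(url string, timeout time.Duration) bool {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
            Timeout: 2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return true
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }
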
	I0229 02:31:57.010655  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:59.510097  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:00.071971  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.084661  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.084690  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:00.572262  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.577772  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.577807  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:01.072371  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:01.077306  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:32:01.084492  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:32:01.084531  369508 api_server.go:131] duration metric: took 4.512702749s to wait for apiserver health ...
	I0229 02:32:01.084544  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:32:01.084554  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:32:01.086337  369508 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:56.215653  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.715898  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.215954  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.216366  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.716093  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.215944  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.715553  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.216341  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.715677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.087584  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:32:01.099724  369508 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
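
[editor's note] "Configuring bridge CNI" amounts to writing a conflist into /etc/cni/net.d (here 1-k8s.conflist, 457 bytes). The exact payload isn't in the log; a representative bridge-plus-portmap conflist looks like the following, with every value illustrative rather than minikube's literal file:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
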
	I0229 02:32:01.122381  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:32:01.133632  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:32:01.133674  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:32:01.133684  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:32:01.133697  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:32:01.133710  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:32:01.133720  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:32:01.133728  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:32:01.133738  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:32:01.133746  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:32:01.133755  369508 system_pods.go:74] duration metric: took 11.346225ms to wait for pod list to return data ...
	I0229 02:32:01.133767  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:32:01.138716  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:32:01.138746  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:32:01.138760  369508 node_conditions.go:105] duration metric: took 4.985648ms to run NodePressure ...
	I0229 02:32:01.138783  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:01.368503  369508 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373648  369508 kubeadm.go:787] kubelet initialised
	I0229 02:32:01.373669  369508 kubeadm.go:788] duration metric: took 5.137378ms waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373677  369508 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:01.379649  369508 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.384724  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384750  369508 pod_ready.go:81] duration metric: took 5.071017ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.384758  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384765  369508 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.390019  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390048  369508 pod_ready.go:81] duration metric: took 5.27491ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.390059  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390067  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.396275  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396294  369508 pod_ready.go:81] duration metric: took 6.218856ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.396302  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396307  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.525881  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525914  369508 pod_ready.go:81] duration metric: took 129.596783ms waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.525927  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525935  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.926806  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926843  369508 pod_ready.go:81] duration metric: took 400.889304ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.926856  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926864  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.326588  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326621  369508 pod_ready.go:81] duration metric: took 399.74816ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.326633  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326639  369508 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.727730  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727759  369508 pod_ready.go:81] duration metric: took 401.108694ms waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.727769  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727776  369508 pod_ready.go:38] duration metric: took 1.354090438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
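
[editor's note] The pod_ready waits above read each pod's PodReady status condition, and a pod hosted on a NotReady node is skipped (the "(skipping!)" lines) rather than waited on. The core predicate, using the k8s.io/api types (a sketch; client wiring and the node check are omitted):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady mirrors the condition pod_ready.go polls: the pod's
    // PodReady condition must be True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }
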
	I0229 02:32:02.727795  369508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:32:02.742069  369508 ops.go:34] apiserver oom_adj: -16
	I0229 02:32:02.742097  369508 kubeadm.go:640] restartCluster took 18.823823408s
	I0229 02:32:02.742107  369508 kubeadm.go:406] StartCluster complete in 18.876382148s
	I0229 02:32:02.742127  369508 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.742271  369508 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:32:02.744032  369508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.744292  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:32:02.744429  369508 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:32:02.744507  369508 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-915633"
	I0229 02:32:02.744526  369508 addons.go:69] Setting default-storageclass=true in profile "embed-certs-915633"
	I0229 02:32:02.744540  369508 addons.go:69] Setting metrics-server=true in profile "embed-certs-915633"
	I0229 02:32:02.744550  369508 addons.go:234] Setting addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:02.744555  369508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-915633"
	W0229 02:32:02.744558  369508 addons.go:243] addon metrics-server should already be in state true
	I0229 02:32:02.744619  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744532  369508 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-915633"
	W0229 02:32:02.744735  369508 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:32:02.744853  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744682  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:32:02.745085  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745113  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745121  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745175  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745339  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745416  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.749865  369508 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-915633" context rescaled to 1 replicas
	I0229 02:32:02.749907  369508 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:32:02.751823  369508 out.go:177] * Verifying Kubernetes components...
	I0229 02:32:02.753296  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:32:02.762688  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0229 02:32:02.763050  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0229 02:32:02.763274  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763693  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763872  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.763895  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.763963  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0229 02:32:02.764307  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.764337  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.764554  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.764592  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.764665  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.765103  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765135  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765144  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765170  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765481  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.765495  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.765863  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.766129  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.769253  369508 addons.go:234] Setting addon default-storageclass=true in "embed-certs-915633"
	W0229 02:32:02.769274  369508 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:32:02.769295  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.769578  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.769607  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.787345  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35577
	I0229 02:32:02.787806  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.788243  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.788266  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.789755  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0229 02:32:02.790272  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.790361  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0229 02:32:02.790634  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.790727  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.791027  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.791192  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791206  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791367  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791402  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791705  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.791924  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.792315  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.792987  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.793026  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.793278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.795128  369508 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:32:02.794105  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.796451  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:32:02.796472  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:32:02.796496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.797812  369508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:59.493919  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.989683  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:02.799249  369508 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.799270  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:32:02.799289  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.800109  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.800960  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.801015  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.801300  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.801496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.801635  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.801763  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.802278  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802617  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.802645  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802836  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.803026  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.803174  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.803390  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.818656  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0229 02:32:02.819105  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.819606  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.819625  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.820022  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.820366  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.822054  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.822412  369508 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:02.822432  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:32:02.822451  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.825579  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826260  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.826293  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826463  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.826614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.826761  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.826954  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.911316  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.945655  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:32:02.945683  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:32:02.981318  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:32:02.981352  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:32:02.983632  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:03.009561  369508 node_ready.go:35] waiting up to 6m0s for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:03.009586  369508 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:32:03.044265  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:03.044293  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:32:03.094073  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:04.287008  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.3033415s)
	I0229 02:32:04.287081  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287094  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287375  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37602435s)
	I0229 02:32:04.287416  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287428  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287440  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287463  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287478  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287487  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287750  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287800  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287828  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287861  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287805  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287914  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287834  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.287774  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289370  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289377  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.289397  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.293892  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.293919  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.294180  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.294198  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.294212  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.376595  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.28244915s)
	I0229 02:32:04.376679  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.376710  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377004  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377022  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377031  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.377039  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377037  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377275  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377319  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377331  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377348  369508 addons.go:470] Verifying addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:04.380194  369508 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:32:04.381510  369508 addons.go:505] enable addons completed in 1.637082823s: enabled=[storage-provisioner default-storageclass metrics-server]
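The addon flow above is: copy each manifest to /etc/kubernetes/addons/ over SSH ("scp memory"), apply it with the bundled kubectl under the in-VM kubeconfig, then verify and report "enable addons completed". A sketch of the apply step only, using os/exec with the exact command shape from the log (applyAddons is a hypothetical helper, not minikube's addons.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddons runs the bundled kubectl against already-copied manifests,
// matching the logged command: sudo KUBECONFIG=... kubectl apply -f ...
func applyAddons(manifests ...string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.4/kubectl",
		"apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("sudo", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := applyAddons(
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}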
	I0229 02:32:02.010578  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:04.509975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.216197  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.716302  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.216170  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.715615  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.216580  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.716088  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.215743  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.716142  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.216543  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.715853  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
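The run of pgrep probes above (process 370051) is a roughly 500ms poll for a kube-apiserver process inside the VM, repeated until it succeeds or a deadline passes, after which minikube falls back to inspecting containers and gathering logs. A minimal sketch of the same probe loop, with the helper name and timeout chosen for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess repeats the logged probe
// (sudo pgrep -xnf kube-apiserver.*minikube.*) until it
// succeeds or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			return nil // pgrep exits 0 when a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process not found after %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}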
	I0229 02:32:03.991440  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.992389  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:08.491225  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.014879  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.518854  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.009085  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:09.009296  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:06.216206  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:06.715748  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.215964  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.716419  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.216034  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.715611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.216207  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.716408  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.216144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.716454  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.491751  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:12.991326  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:10.013574  369508 node_ready.go:49] node "embed-certs-915633" has status "Ready":"True"
	I0229 02:32:10.013605  369508 node_ready.go:38] duration metric: took 7.004009102s waiting for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:10.013617  369508 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:10.020332  369508 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025740  369508 pod_ready.go:92] pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.025766  369508 pod_ready.go:81] duration metric: took 5.403764ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025778  369508 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534182  369508 pod_ready.go:92] pod "etcd-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.534212  369508 pod_ready.go:81] duration metric: took 508.426559ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534238  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.048997  369508 pod_ready.go:92] pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:11.049027  369508 pod_ready.go:81] duration metric: took 514.780048ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.049040  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:13.056477  369508 pod_ready.go:102] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"False"
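The node_ready/pod_ready lines above wait first for the node, then for each system-critical pod, to report a Ready condition of True. A sketch of one such pod wait using client-go (minikube's pod_ready.go uses its own helpers; the kubeconfig path and pod name here come from the log, while the poll interval is assumed):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True,
// the same check behind the `has status "Ready":"True"` lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(context.TODO(), "coredns-5dd5756b68-kt28m", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}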
	I0229 02:32:11.010305  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:13.011477  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.215611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:11.716198  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.216332  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.716413  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.216407  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.716466  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.216182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.716285  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.215995  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.715613  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.491511  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:17.494485  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:15.056064  369508 pod_ready.go:92] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.056093  369508 pod_ready.go:81] duration metric: took 4.007044542s waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.056104  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061418  369508 pod_ready.go:92] pod "kube-proxy-6tt7l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.061440  369508 pod_ready.go:81] duration metric: took 5.329971ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061451  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578305  369508 pod_ready.go:92] pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.578332  369508 pod_ready.go:81] duration metric: took 516.873281ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578341  369508 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:17.585624  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:19.586470  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:15.510630  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:18.010381  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:16.215530  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:16.716420  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.216031  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.716303  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.216082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.715523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.216166  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.716503  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.215680  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.715770  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.989766  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.989821  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.586820  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:20.509895  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.010371  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.215523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:21.715617  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.216133  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.716029  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.216141  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.715578  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.215640  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.715601  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.215959  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.716394  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.990493  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.990911  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.489681  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.085933  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.086754  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.508765  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:27.508956  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:29.512409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.215946  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:26.715834  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.216243  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.715581  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.215521  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.715849  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.716497  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.215657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.715492  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.490400  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:32.990250  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:30.586107  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:33.086852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.518170  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:34.009514  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.216322  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:31.716160  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.215557  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.715618  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.215761  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.716216  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.216460  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.716244  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.215551  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.715633  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.990305  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.990956  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:35.585472  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:37.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.509112  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:38.509634  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.215910  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:36.716307  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:37.216308  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:37.216404  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:37.262324  370051 cri.go:89] found id: ""
	I0229 02:32:37.262358  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.262370  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:37.262378  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:37.262442  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:37.303758  370051 cri.go:89] found id: ""
	I0229 02:32:37.303790  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.303802  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:37.303809  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:37.303880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:37.349512  370051 cri.go:89] found id: ""
	I0229 02:32:37.349538  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.349546  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:37.349553  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:37.349607  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:37.389630  370051 cri.go:89] found id: ""
	I0229 02:32:37.389657  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.389668  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:37.389676  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:37.389752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:37.435918  370051 cri.go:89] found id: ""
	I0229 02:32:37.435954  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.435967  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:37.435976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:37.436044  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:37.479336  370051 cri.go:89] found id: ""
	I0229 02:32:37.479369  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.479377  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:37.479384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:37.479460  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:37.519944  370051 cri.go:89] found id: ""
	I0229 02:32:37.519979  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.519991  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:37.519999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:37.520071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:37.563848  370051 cri.go:89] found id: ""
	I0229 02:32:37.563875  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.563884  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:37.563895  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:37.563915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:37.607989  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:37.608025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:37.660272  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:37.660324  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:37.676878  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:37.676909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:37.805099  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:37.805132  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:37.805159  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
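Once the pgrep poll keeps failing, each cycle above enumerates control-plane containers with crictl (all empty here), then gathers kubelet, dmesg, and CRI-O logs; "describe nodes" fails with "connection refused" on localhost:8443 because no apiserver is listening. A sketch of the enumeration step, with the component list copied from the log and the rest illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same components the log probes, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors the logged probe: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
	// With nothing found, the log's next step is journald collection,
	// e.g. sudo journalctl -u kubelet -n 400 and sudo journalctl -u crio -n 400.
}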
	I0229 02:32:40.378467  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:40.393066  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:40.393221  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:40.432592  370051 cri.go:89] found id: ""
	I0229 02:32:40.432619  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.432628  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:40.432634  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:40.432693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:40.473651  370051 cri.go:89] found id: ""
	I0229 02:32:40.473706  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.473716  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:40.473722  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:40.473781  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:40.520262  370051 cri.go:89] found id: ""
	I0229 02:32:40.520292  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.520303  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:40.520312  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:40.520374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:40.560359  370051 cri.go:89] found id: ""
	I0229 02:32:40.560393  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.560402  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:40.560408  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:40.560474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:40.602145  370051 cri.go:89] found id: ""
	I0229 02:32:40.602173  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.602181  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:40.602187  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:40.602266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:40.640744  370051 cri.go:89] found id: ""
	I0229 02:32:40.640778  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.640791  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:40.640799  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:40.640869  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:40.681863  370051 cri.go:89] found id: ""
	I0229 02:32:40.681895  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.681908  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:40.681916  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:40.681985  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:40.725859  370051 cri.go:89] found id: ""
	I0229 02:32:40.725890  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.725899  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:40.725910  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:40.725924  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:40.794666  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:40.794705  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:40.854173  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:40.854215  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:40.901744  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:40.901786  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:40.925331  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:40.925371  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:41.005785  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:39.491292  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.494077  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:40.086540  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:42.584644  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:44.587012  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.010764  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.510128  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.506756  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:43.522038  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:43.522135  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:43.559609  370051 cri.go:89] found id: ""
	I0229 02:32:43.559635  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.559642  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:43.559649  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:43.559707  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:43.609059  370051 cri.go:89] found id: ""
	I0229 02:32:43.609087  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.609096  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:43.609102  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:43.609159  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:43.648988  370051 cri.go:89] found id: ""
	I0229 02:32:43.649018  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.649029  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:43.649037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:43.649104  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:43.690995  370051 cri.go:89] found id: ""
	I0229 02:32:43.691028  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.691042  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:43.691054  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:43.691120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:43.729221  370051 cri.go:89] found id: ""
	I0229 02:32:43.729249  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.729257  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:43.729263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:43.729334  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:43.767141  370051 cri.go:89] found id: ""
	I0229 02:32:43.767174  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.767186  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:43.767194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:43.767266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:43.807926  370051 cri.go:89] found id: ""
	I0229 02:32:43.807962  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.807970  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:43.807976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:43.808029  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:43.857945  370051 cri.go:89] found id: ""
	I0229 02:32:43.857973  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.857981  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:43.857991  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:43.858005  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:43.941290  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:43.941338  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:43.986788  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:43.986823  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:44.037384  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:44.037421  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:44.052668  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:44.052696  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:44.127124  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:43.990179  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.990921  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.991525  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.086821  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:49.585987  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.510273  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:48.009067  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:50.011776  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:46.627409  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:46.642306  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:46.642397  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:46.685358  370051 cri.go:89] found id: ""
	I0229 02:32:46.685389  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.685400  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:46.685431  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:46.685493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:46.724996  370051 cri.go:89] found id: ""
	I0229 02:32:46.725026  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.725035  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:46.725041  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:46.725113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:46.765815  370051 cri.go:89] found id: ""
	I0229 02:32:46.765849  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.765857  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:46.765863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:46.765924  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:46.808946  370051 cri.go:89] found id: ""
	I0229 02:32:46.808980  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.808991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:46.809000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:46.809068  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:46.865068  370051 cri.go:89] found id: ""
	I0229 02:32:46.865106  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.865119  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:46.865127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:46.865200  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:46.932233  370051 cri.go:89] found id: ""
	I0229 02:32:46.932260  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.932268  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:46.932275  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:46.932331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:46.985701  370051 cri.go:89] found id: ""
	I0229 02:32:46.985732  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.985744  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:46.985752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:46.985819  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:47.027497  370051 cri.go:89] found id: ""
	I0229 02:32:47.027524  370051 logs.go:276] 0 containers: []
	W0229 02:32:47.027536  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:47.027548  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:47.027565  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:47.075955  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:47.075990  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:47.093922  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:47.093949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:47.165000  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:47.165029  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:47.165046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:47.250161  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:47.250201  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:49.794654  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:49.809706  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:49.809787  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:49.868163  370051 cri.go:89] found id: ""
	I0229 02:32:49.868197  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.868217  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:49.868223  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:49.868277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:49.928462  370051 cri.go:89] found id: ""
	I0229 02:32:49.928495  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.928508  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:49.928516  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:49.928580  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:49.975725  370051 cri.go:89] found id: ""
	I0229 02:32:49.975755  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.975765  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:49.975774  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:49.975849  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:50.017007  370051 cri.go:89] found id: ""
	I0229 02:32:50.017036  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.017046  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:50.017051  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:50.017118  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:50.054522  370051 cri.go:89] found id: ""
	I0229 02:32:50.054551  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.054560  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:50.054566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:50.054620  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:50.096274  370051 cri.go:89] found id: ""
	I0229 02:32:50.096300  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.096308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:50.096319  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:50.096382  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:50.142543  370051 cri.go:89] found id: ""
	I0229 02:32:50.142581  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.142590  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:50.142597  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:50.142667  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:50.182452  370051 cri.go:89] found id: ""
	I0229 02:32:50.182482  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.182492  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:50.182505  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:50.182522  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:50.266311  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:50.266355  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:50.309277  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:50.309322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:50.360492  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:50.360536  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:50.376711  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:50.376744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:50.447128  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:49.992032  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.490801  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:51.586053  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:53.586268  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.510054  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:54.510975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.947926  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:52.970209  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:52.970317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:53.010840  370051 cri.go:89] found id: ""
	I0229 02:32:53.010868  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.010878  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:53.010886  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:53.010983  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:53.049458  370051 cri.go:89] found id: ""
	I0229 02:32:53.049490  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.049503  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:53.049511  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:53.049578  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:53.088615  370051 cri.go:89] found id: ""
	I0229 02:32:53.088646  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.088656  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:53.088671  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:53.088738  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:53.130176  370051 cri.go:89] found id: ""
	I0229 02:32:53.130210  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.130237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:53.130247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:53.130317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:53.177876  370051 cri.go:89] found id: ""
	I0229 02:32:53.177908  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.177920  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:53.177928  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:53.177991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:53.216036  370051 cri.go:89] found id: ""
	I0229 02:32:53.216065  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.216074  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:53.216080  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:53.216143  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:53.254673  370051 cri.go:89] found id: ""
	I0229 02:32:53.254705  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.254716  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:53.254724  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:53.254785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:53.291508  370051 cri.go:89] found id: ""
	I0229 02:32:53.291539  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.291551  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:53.291564  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:53.291581  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:53.343312  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:53.343354  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:53.359264  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:53.359294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:53.431396  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:53.431428  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:53.431445  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:53.512494  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:53.512529  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
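The block above is one full pass of minikube's log collector: it first checks for a running apiserver process with pgrep, then probes each expected control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) via crictl, finds no container IDs because the v1.16.0 control plane never came up, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The same cycle repeats below on roughly a three-second interval. A minimal shell sketch of the probe, built only from commands that appear verbatim in the log (run inside the VM, e.g. via minikube ssh; this is a hand-debugging aid, not minikube's source):

    # One probe pass; empty output corresponds to the 'found id: ""' lines above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
    # Fallback sources the collector gathers when nothing is running:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a || sudo docker ps -a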
	I0229 02:32:56.057340  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:56.073074  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:56.073158  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:56.111650  370051 cri.go:89] found id: ""
	I0229 02:32:56.111684  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.111704  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:56.111713  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:56.111785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:54.990490  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.991005  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:55.587290  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:58.086312  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:57.008288  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:59.011396  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.150147  370051 cri.go:89] found id: ""
	I0229 02:32:56.150178  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.150191  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:56.150200  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:56.150280  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:56.192842  370051 cri.go:89] found id: ""
	I0229 02:32:56.192878  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.192890  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:56.192898  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:56.192969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:56.232013  370051 cri.go:89] found id: ""
	I0229 02:32:56.232051  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.232062  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:56.232079  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:56.232151  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:56.273824  370051 cri.go:89] found id: ""
	I0229 02:32:56.273858  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.273871  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:56.273882  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:56.273949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:56.312112  370051 cri.go:89] found id: ""
	I0229 02:32:56.312139  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.312147  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:56.312153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:56.312203  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:56.352558  370051 cri.go:89] found id: ""
	I0229 02:32:56.352585  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.352593  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:56.352600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:56.352666  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:56.397719  370051 cri.go:89] found id: ""
	I0229 02:32:56.397762  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.397775  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:56.397790  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:56.397808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:56.447793  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:56.447831  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:56.463859  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:56.463894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:56.540306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:56.540333  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:56.540347  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:56.633201  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:56.633247  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.207459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:59.222165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:59.222271  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:59.261197  370051 cri.go:89] found id: ""
	I0229 02:32:59.261230  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.261242  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:59.261251  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:59.261338  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:59.300874  370051 cri.go:89] found id: ""
	I0229 02:32:59.300917  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.300940  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:59.300950  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:59.301025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:59.345399  370051 cri.go:89] found id: ""
	I0229 02:32:59.345435  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.345446  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:59.345455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:59.345525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:59.386068  370051 cri.go:89] found id: ""
	I0229 02:32:59.386102  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.386112  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:59.386132  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:59.386184  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:59.436597  370051 cri.go:89] found id: ""
	I0229 02:32:59.436629  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.436641  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:59.436649  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:59.436708  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:59.481417  370051 cri.go:89] found id: ""
	I0229 02:32:59.481446  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.481462  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:59.481469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:59.481535  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:59.527725  370051 cri.go:89] found id: ""
	I0229 02:32:59.527752  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.527763  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:59.527771  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:59.527845  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:59.574502  370051 cri.go:89] found id: ""
	I0229 02:32:59.574535  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.574547  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:59.574561  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:59.574579  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:59.669584  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:59.669630  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.730049  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:59.730096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:59.779562  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:59.779613  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:59.797016  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:59.797046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:59.876438  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
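Every describe-nodes attempt fails the same way: the bundled v1.16.0 kubectl is pointed at the local apiserver through /var/lib/minikube/kubeconfig, and since the crictl probes above find no kube-apiserver container, nothing is listening on localhost:8443, the connection is refused, and the command exits with status 1. A quick manual check from inside the VM (assumed here, not taken from the log) would be to confirm the port is closed before re-running the exact command the collector uses:

    # Assumed manual check: is anything listening on the apiserver port?
    sudo ss -tlnp 'sport = :8443' || true
    # Re-run the collector's exact command:
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig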
	I0229 02:32:58.991584  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.489321  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:03.489615  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:00.585463  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:02.587986  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.588479  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.509980  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.009579  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
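Interleaved with the 370051 collector output, three other runners (369508, 369591, 369869) are each polling a metrics-server pod whose Ready condition never turns true; pod_ready.go logs the "Ready":"False" status on every re-check and keeps polling until the harness's wait deadline expires. A rough kubectl equivalent of that poll (illustration only; the k8s-app=metrics-server selector is an assumption, and minikube checks the condition through the API rather than shelling out):

    # Hypothetical equivalent of the pod_ready poll; selector assumed.
    kubectl --namespace kube-system wait pod \
      --selector k8s-app=metrics-server \
      --for=condition=Ready --timeout=5m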
	I0229 02:33:02.377144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:02.391585  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:02.391682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:02.432359  370051 cri.go:89] found id: ""
	I0229 02:33:02.432390  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.432399  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:02.432406  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:02.432462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:02.476733  370051 cri.go:89] found id: ""
	I0229 02:33:02.476768  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.476781  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:02.476790  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:02.476856  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:02.521414  370051 cri.go:89] found id: ""
	I0229 02:33:02.521440  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.521448  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:02.521454  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:02.521513  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:02.561663  370051 cri.go:89] found id: ""
	I0229 02:33:02.561690  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.561698  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:02.561704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:02.561755  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:02.611953  370051 cri.go:89] found id: ""
	I0229 02:33:02.611989  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.612002  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:02.612010  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:02.612079  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:02.663254  370051 cri.go:89] found id: ""
	I0229 02:33:02.663282  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.663290  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:02.663297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:02.663348  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:02.721449  370051 cri.go:89] found id: ""
	I0229 02:33:02.721484  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.721497  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:02.721506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:02.721579  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:02.761197  370051 cri.go:89] found id: ""
	I0229 02:33:02.761239  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.761251  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:02.761265  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:02.761282  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:02.810457  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:02.810498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:02.828906  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:02.828940  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:02.911895  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:02.911932  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:02.911945  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:02.995120  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:02.995152  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:05.544629  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:05.559266  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:05.559342  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:05.609673  370051 cri.go:89] found id: ""
	I0229 02:33:05.609706  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.609718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:05.609727  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:05.609795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:05.665161  370051 cri.go:89] found id: ""
	I0229 02:33:05.665192  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.665203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:05.665211  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:05.665282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:05.719923  370051 cri.go:89] found id: ""
	I0229 02:33:05.719949  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.719957  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:05.719963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:05.720025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:05.765189  370051 cri.go:89] found id: ""
	I0229 02:33:05.765224  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.765237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:05.765245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:05.765357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:05.803788  370051 cri.go:89] found id: ""
	I0229 02:33:05.803820  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.803829  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:05.803836  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:05.803909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:05.842152  370051 cri.go:89] found id: ""
	I0229 02:33:05.842178  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.842188  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:05.842197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:05.842278  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:05.885042  370051 cri.go:89] found id: ""
	I0229 02:33:05.885071  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.885084  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:05.885092  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:05.885156  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:05.926032  370051 cri.go:89] found id: ""
	I0229 02:33:05.926069  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.926082  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:05.926096  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:05.926112  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:06.014702  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:06.014744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:06.063510  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:06.063550  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:06.114215  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:06.114272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:06.130132  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:06.130169  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:05.490726  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.491068  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.085225  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:09.087524  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:06.508469  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:08.509399  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:06.205692  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:08.706549  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:08.722548  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:08.722614  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:08.768518  370051 cri.go:89] found id: ""
	I0229 02:33:08.768553  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.768564  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:08.768572  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:08.768630  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:08.804600  370051 cri.go:89] found id: ""
	I0229 02:33:08.804630  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.804643  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:08.804651  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:08.804721  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:08.842466  370051 cri.go:89] found id: ""
	I0229 02:33:08.842497  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.842510  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:08.842518  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:08.842589  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:08.878384  370051 cri.go:89] found id: ""
	I0229 02:33:08.878412  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.878421  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:08.878427  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:08.878484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:08.924228  370051 cri.go:89] found id: ""
	I0229 02:33:08.924262  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.924275  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:08.924295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:08.924374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:08.966122  370051 cri.go:89] found id: ""
	I0229 02:33:08.966157  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.966168  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:08.966177  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:08.966254  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:09.011109  370051 cri.go:89] found id: ""
	I0229 02:33:09.011135  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.011144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:09.011152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:09.011217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:09.059716  370051 cri.go:89] found id: ""
	I0229 02:33:09.059749  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.059782  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:09.059795  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:09.059812  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:09.110564  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:09.110599  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:09.126037  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:09.126065  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:09.199827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:09.199858  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:09.199892  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:09.282624  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:09.282661  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:09.990502  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.991783  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.586475  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:13.586740  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:10.511051  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:12.512644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:15.009478  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.829017  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:11.842826  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:11.842894  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:11.881652  370051 cri.go:89] found id: ""
	I0229 02:33:11.881689  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.881700  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:11.881709  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:11.881773  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:11.919252  370051 cri.go:89] found id: ""
	I0229 02:33:11.919291  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.919302  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:11.919309  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:11.919380  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:11.959145  370051 cri.go:89] found id: ""
	I0229 02:33:11.959175  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.959187  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:11.959196  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:11.959263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:12.002105  370051 cri.go:89] found id: ""
	I0229 02:33:12.002134  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.002145  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:12.002153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:12.002219  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:12.042157  370051 cri.go:89] found id: ""
	I0229 02:33:12.042188  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.042221  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:12.042249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:12.042326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:12.080121  370051 cri.go:89] found id: ""
	I0229 02:33:12.080150  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.080158  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:12.080165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:12.080231  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:12.119259  370051 cri.go:89] found id: ""
	I0229 02:33:12.119286  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.119294  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:12.119301  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:12.119357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:12.160136  370051 cri.go:89] found id: ""
	I0229 02:33:12.160171  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.160182  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:12.160195  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:12.160209  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:12.209770  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:12.209810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:12.226429  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:12.226460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:12.295933  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:12.295966  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:12.295978  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:12.380794  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:12.380843  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:14.971692  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:14.986085  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:14.986162  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:15.024756  370051 cri.go:89] found id: ""
	I0229 02:33:15.024788  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.024801  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:15.024809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:15.024868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:15.065131  370051 cri.go:89] found id: ""
	I0229 02:33:15.065159  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.065172  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:15.065180  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:15.065251  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:15.104744  370051 cri.go:89] found id: ""
	I0229 02:33:15.104775  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.104786  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:15.104794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:15.104858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:15.145710  370051 cri.go:89] found id: ""
	I0229 02:33:15.145737  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.145745  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:15.145752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:15.145803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:15.184908  370051 cri.go:89] found id: ""
	I0229 02:33:15.184933  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.184942  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:15.184951  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:15.185016  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:15.230195  370051 cri.go:89] found id: ""
	I0229 02:33:15.230220  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.230241  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:15.230249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:15.230326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:15.269750  370051 cri.go:89] found id: ""
	I0229 02:33:15.269774  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.269783  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:15.269789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:15.269852  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:15.312331  370051 cri.go:89] found id: ""
	I0229 02:33:15.312360  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.312373  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:15.312387  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:15.312402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:15.363032  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:15.363067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:15.422421  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:15.422463  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:15.445235  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:15.445272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:15.530010  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:15.530047  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:15.530066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:14.489188  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.991028  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.090733  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.587045  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:17.510766  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:20.009379  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.116265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:18.130375  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:18.130439  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:18.167740  370051 cri.go:89] found id: ""
	I0229 02:33:18.167767  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.167776  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:18.167782  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:18.167843  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:18.205621  370051 cri.go:89] found id: ""
	I0229 02:33:18.205653  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.205662  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:18.205670  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:18.205725  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:18.246917  370051 cri.go:89] found id: ""
	I0229 02:33:18.246954  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.246975  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:18.246983  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:18.247040  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:18.285087  370051 cri.go:89] found id: ""
	I0229 02:33:18.285114  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.285123  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:18.285130  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:18.285181  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:18.323989  370051 cri.go:89] found id: ""
	I0229 02:33:18.324018  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.324027  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:18.324033  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:18.324094  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:18.372741  370051 cri.go:89] found id: ""
	I0229 02:33:18.372769  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.372779  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:18.372785  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:18.372838  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:18.432846  370051 cri.go:89] found id: ""
	I0229 02:33:18.432888  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.432900  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:18.432908  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:18.432977  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:18.486357  370051 cri.go:89] found id: ""
	I0229 02:33:18.486387  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.486399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:18.486411  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:18.486431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:18.532363  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:18.532402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:18.582035  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:18.582076  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:18.599009  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:18.599050  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:18.673580  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:18.673609  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:18.673625  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:19.490704  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.990251  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.085541  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:23.086148  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:22.009826  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:24.509388  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.259614  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:21.274150  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:21.274250  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:21.311859  370051 cri.go:89] found id: ""
	I0229 02:33:21.311895  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.311908  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:21.311917  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:21.311984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:21.364260  370051 cri.go:89] found id: ""
	I0229 02:33:21.364296  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.364309  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:21.364317  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:21.364391  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:21.424181  370051 cri.go:89] found id: ""
	I0229 02:33:21.424217  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.424229  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:21.424237  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:21.424306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:21.482499  370051 cri.go:89] found id: ""
	I0229 02:33:21.482531  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.482543  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:21.482551  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:21.482621  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:21.523743  370051 cri.go:89] found id: ""
	I0229 02:33:21.523775  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.523785  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:21.523793  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:21.523868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:21.563759  370051 cri.go:89] found id: ""
	I0229 02:33:21.563789  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.563800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:21.563809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:21.563889  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:21.610162  370051 cri.go:89] found id: ""
	I0229 02:33:21.610265  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.610286  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:21.610295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:21.610378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:21.652001  370051 cri.go:89] found id: ""
	I0229 02:33:21.652028  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.652037  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:21.652047  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:21.652060  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:21.704028  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:21.704067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:21.720924  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:21.720956  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:21.798619  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:21.798645  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:21.798664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:21.888445  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:21.888506  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.437647  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:24.459963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:24.460041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:24.503906  370051 cri.go:89] found id: ""
	I0229 02:33:24.503940  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.503950  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:24.503956  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:24.504031  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:24.541893  370051 cri.go:89] found id: ""
	I0229 02:33:24.541919  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.541929  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:24.541935  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:24.541991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:24.584717  370051 cri.go:89] found id: ""
	I0229 02:33:24.584748  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.584760  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:24.584769  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:24.584836  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:24.623334  370051 cri.go:89] found id: ""
	I0229 02:33:24.623362  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.623371  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:24.623378  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:24.623447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:24.665862  370051 cri.go:89] found id: ""
	I0229 02:33:24.665890  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.665902  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:24.665911  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:24.665984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:24.705509  370051 cri.go:89] found id: ""
	I0229 02:33:24.705540  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.705551  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:24.705560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:24.705634  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:24.745348  370051 cri.go:89] found id: ""
	I0229 02:33:24.745389  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.745399  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:24.745406  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:24.745462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:24.785490  370051 cri.go:89] found id: ""
	I0229 02:33:24.785520  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.785529  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:24.785539  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:24.785553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.829556  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:24.829589  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:24.877914  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:24.877949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:24.894590  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:24.894623  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:24.972948  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:24.972981  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:24.972997  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:23.990806  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.489823  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:25.586684  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.588321  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.509932  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:29.010692  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
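The cycle above repeats throughout this run: process 370051 probes for each control-plane component with `sudo crictl ps -a --quiet --name=<component>`, and `found id: ""` means crictl returned no container IDs at all, which is why every component reports 0 containers. A minimal sketch of the same probe in Go (the helper name is ours, for illustration; this is not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the probe in the log: ask crictl for the IDs of
    // containers whose name matches the component, in any state (-a).
    // An empty slice corresponds to the log's `found id: ""`.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps for %q: %w", component, err)
        }
        return strings.Fields(string(out)), nil // one container ID per line
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println("error:", err)
                continue
            }
            fmt.Printf("%s: %d containers\n", c, len(ids)) // 0 across the board reproduces this log
        }
    }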
	I0229 02:33:27.555364  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:27.570747  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:27.570820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:27.609771  370051 cri.go:89] found id: ""
	I0229 02:33:27.609800  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.609807  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:27.609813  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:27.609863  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:27.654316  370051 cri.go:89] found id: ""
	I0229 02:33:27.654347  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.654360  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:27.654376  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:27.654453  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:27.695089  370051 cri.go:89] found id: ""
	I0229 02:33:27.695125  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.695137  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:27.695143  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:27.695199  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:27.733846  370051 cri.go:89] found id: ""
	I0229 02:33:27.733881  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.733893  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:27.733901  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:27.733972  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:27.772906  370051 cri.go:89] found id: ""
	I0229 02:33:27.772940  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.772953  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:27.772961  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:27.773039  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:27.812266  370051 cri.go:89] found id: ""
	I0229 02:33:27.812295  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.812308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:27.812316  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:27.812387  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:27.849272  370051 cri.go:89] found id: ""
	I0229 02:33:27.849305  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.849316  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:27.849324  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:27.849393  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:27.887495  370051 cri.go:89] found id: ""
	I0229 02:33:27.887528  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.887541  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:27.887554  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:27.887569  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:27.972220  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:27.972261  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:28.020757  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:28.020797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:28.070347  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:28.070381  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:28.089905  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:28.089947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:28.183306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
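The describe-nodes failure repeats on every cycle and follows directly from the probes above: localhost:8443 is the apiserver's secure port inside the VM, and with no kube-apiserver container running, the connection is necessarily refused. A sketch of running the same command and separating "kubectl ran but the apiserver is down" from "kubectl itself failed to start" (paths copied from the log; illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.16.0/kubectl",
            "describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            if exitErr, ok := err.(*exec.ExitError); ok {
                // kubectl ran and exited non-zero: the "connection refused" case in this log.
                fmt.Printf("kubectl exited %d:\n%s", exitErr.ExitCode(), out)
                return
            }
            // kubectl could not be started at all (missing binary, bad path, ...).
            fmt.Println("could not run kubectl:", err)
            return
        }
        fmt.Printf("%s", out)
    }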
	I0229 02:33:30.683857  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:30.701341  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:30.701443  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:30.741342  370051 cri.go:89] found id: ""
	I0229 02:33:30.741376  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.741387  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:30.741397  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:30.741475  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:30.785372  370051 cri.go:89] found id: ""
	I0229 02:33:30.785415  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.785427  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:30.785435  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:30.785506  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:30.828402  370051 cri.go:89] found id: ""
	I0229 02:33:30.828428  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.828436  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:30.828442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:30.828504  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:30.872656  370051 cri.go:89] found id: ""
	I0229 02:33:30.872684  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.872695  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:30.872704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:30.872770  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:30.918746  370051 cri.go:89] found id: ""
	I0229 02:33:30.918775  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.918786  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:30.918794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:30.918867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:30.956794  370051 cri.go:89] found id: ""
	I0229 02:33:30.956838  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.956852  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:30.956860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:30.956935  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:31.000595  370051 cri.go:89] found id: ""
	I0229 02:33:31.000618  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.000628  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:31.000637  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:31.000699  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:31.039060  370051 cri.go:89] found id: ""
	I0229 02:33:31.039089  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.039100  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:31.039111  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:31.039133  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:31.089919  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:31.089949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:31.110276  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:31.110315  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:28.990807  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.993882  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.489703  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.086658  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:32.586407  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:34.588272  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:31.509534  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.511710  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:31.235760  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:31.235791  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:31.235810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:31.323257  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:31.323322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
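Five collectors run in each gathering pass, in varying order: the kubelet and CRI-O unit journals (last 400 lines each), a filtered dmesg (warning level and above, human-readable, pager and color disabled, trimmed with tail), the describe-nodes attempt, and a container inventory that resolves crictl's path with `which crictl || echo crictl` and falls back to `docker ps -a` if crictl fails outright. A sketch that replays the five pipelines as they appear in the log (run inside the minikube VM; commands copied verbatim):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // One entry per "Gathering logs for ..." line in the log, command copied verbatim.
    var collectors = []struct{ name, cmd string }{
        {"kubelet", "sudo journalctl -u kubelet -n 400"},
        {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        {"describe nodes", "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
        {"CRI-O", "sudo journalctl -u crio -n 400"},
        {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }

    func main() {
        for _, c := range collectors {
            // Each pipeline is a bash one-liner, so run it through bash -c as the log shows.
            out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
            fmt.Printf("=== %s (err=%v) ===\n%s\n", c.name, err, out)
        }
    }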
	I0229 02:33:33.872956  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:33.887953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:33.888034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:33.927887  370051 cri.go:89] found id: ""
	I0229 02:33:33.927926  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.927938  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:33.927945  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:33.928001  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:33.967301  370051 cri.go:89] found id: ""
	I0229 02:33:33.967333  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.967345  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:33.967356  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:33.967425  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:34.009949  370051 cri.go:89] found id: ""
	I0229 02:33:34.009982  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.009992  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:34.009999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:34.010073  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:34.056197  370051 cri.go:89] found id: ""
	I0229 02:33:34.056224  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.056232  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:34.056239  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:34.056314  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:34.107089  370051 cri.go:89] found id: ""
	I0229 02:33:34.107120  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.107132  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:34.107140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:34.107206  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:34.162822  370051 cri.go:89] found id: ""
	I0229 02:33:34.162856  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.162875  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:34.162884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:34.162961  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:34.209963  370051 cri.go:89] found id: ""
	I0229 02:33:34.209993  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.210001  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:34.210008  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:34.210078  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:34.250688  370051 cri.go:89] found id: ""
	I0229 02:33:34.250726  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.250735  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:34.250754  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:34.250768  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:34.298953  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:34.298993  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:34.314067  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:34.314100  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:34.393515  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:34.393536  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:34.393551  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:34.477034  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:34.477078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:35.990175  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.490651  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.087261  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:39.588400  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:36.009933  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.508929  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
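The pod_ready.go:102 lines threaded through this output come from three other test processes (PIDs 369591, 369508, and 369869), each polling its own metrics-server pod's Ready condition until it turns True or the test times out; that interleaving is also why timestamps occasionally step backwards. A client-go sketch of such a polling loop (pod name copied from the log; the helper is illustrative, not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod every two seconds and returns once its Ready
    // condition is True, or an error when the timeout expires.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat API hiccups as "not ready yet" and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status) // as in the log lines
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-zghwq", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }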
	I0229 02:33:37.025152  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:37.040410  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:37.040491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:37.077922  370051 cri.go:89] found id: ""
	I0229 02:33:37.077953  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.077965  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:37.077973  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:37.078041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:37.137895  370051 cri.go:89] found id: ""
	I0229 02:33:37.137925  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.137938  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:37.137946  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:37.138012  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:37.199291  370051 cri.go:89] found id: ""
	I0229 02:33:37.199324  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.199336  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:37.199344  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:37.199422  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:37.242817  370051 cri.go:89] found id: ""
	I0229 02:33:37.242848  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.242857  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:37.242863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:37.242917  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:37.282171  370051 cri.go:89] found id: ""
	I0229 02:33:37.282196  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.282204  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:37.282211  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:37.282284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:37.328608  370051 cri.go:89] found id: ""
	I0229 02:33:37.328639  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.328647  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:37.328658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:37.328724  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:37.372965  370051 cri.go:89] found id: ""
	I0229 02:33:37.372996  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.373008  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:37.373016  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:37.373091  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:37.417597  370051 cri.go:89] found id: ""
	I0229 02:33:37.417630  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.417642  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:37.417655  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:37.417673  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:37.472023  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:37.472058  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:37.487931  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:37.487961  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:37.568196  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:37.568227  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:37.568245  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:37.658485  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:37.658523  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.203039  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:40.220385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:40.220477  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:40.262962  370051 cri.go:89] found id: ""
	I0229 02:33:40.262993  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.263004  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:40.263016  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:40.263086  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:40.302452  370051 cri.go:89] found id: ""
	I0229 02:33:40.302483  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.302495  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:40.302503  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:40.302560  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:40.342509  370051 cri.go:89] found id: ""
	I0229 02:33:40.342544  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.342557  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:40.342566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:40.342644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:40.385585  370051 cri.go:89] found id: ""
	I0229 02:33:40.385615  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.385629  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:40.385638  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:40.385703  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:40.426839  370051 cri.go:89] found id: ""
	I0229 02:33:40.426874  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.426887  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:40.426896  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:40.426962  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:40.467217  370051 cri.go:89] found id: ""
	I0229 02:33:40.467241  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.467251  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:40.467257  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:40.467332  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:40.513525  370051 cri.go:89] found id: ""
	I0229 02:33:40.513546  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.513553  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:40.513559  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:40.513609  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:40.554187  370051 cri.go:89] found id: ""
	I0229 02:33:40.554256  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.554269  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:40.554282  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:40.554301  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:40.636447  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:40.636477  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:40.636494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:40.716381  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:40.716423  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.761946  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:40.761982  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:40.812828  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:40.812862  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:40.492178  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.991517  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.086413  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:44.586663  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:40.510266  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.510702  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:45.013362  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
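Each 370051 cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: -f matches the pattern against the full command line, -x requires the match to be exact, and -n keeps only the newest matching process; pgrep exits non-zero when nothing matches. A one-function sketch of the same process probe (illustrative; the function name is ours):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverProcessPresent reports whether any process's full command line
    // matches the pattern, exactly as the pgrep probe in the log does.
    func apiserverProcessPresent() bool {
        // pgrep prints matching PIDs and exits 0 on a match, 1 on none.
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        fmt.Println("kube-apiserver process present:", apiserverProcessPresent())
    }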
	I0229 02:33:43.336139  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:43.352278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:43.352361  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:43.392555  370051 cri.go:89] found id: ""
	I0229 02:33:43.392593  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.392607  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:43.392616  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:43.392689  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:43.438169  370051 cri.go:89] found id: ""
	I0229 02:33:43.438202  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.438216  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:43.438242  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:43.438331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:43.476987  370051 cri.go:89] found id: ""
	I0229 02:33:43.477021  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.477033  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:43.477042  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:43.477109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:43.526728  370051 cri.go:89] found id: ""
	I0229 02:33:43.526758  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.526767  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:43.526778  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:43.526833  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:43.572222  370051 cri.go:89] found id: ""
	I0229 02:33:43.572260  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.572273  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:43.572282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:43.572372  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:43.618650  370051 cri.go:89] found id: ""
	I0229 02:33:43.618679  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.618691  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:43.618698  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:43.618764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:43.658069  370051 cri.go:89] found id: ""
	I0229 02:33:43.658104  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.658116  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:43.658126  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:43.658196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:43.700790  370051 cri.go:89] found id: ""
	I0229 02:33:43.700829  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.700841  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:43.700855  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:43.700874  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:43.753330  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:43.753372  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:43.770261  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:43.770294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:43.842407  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:43.842430  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:43.842447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:43.935427  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:43.935470  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:45.490296  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.490514  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.088903  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.585902  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.510105  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.511420  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:46.498694  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:46.516463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:46.516541  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:46.554731  370051 cri.go:89] found id: ""
	I0229 02:33:46.554757  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.554766  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:46.554772  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:46.554835  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:46.596851  370051 cri.go:89] found id: ""
	I0229 02:33:46.596892  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.596905  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:46.596912  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:46.596981  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:46.634978  370051 cri.go:89] found id: ""
	I0229 02:33:46.635008  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.635017  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:46.635024  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:46.635089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:46.675302  370051 cri.go:89] found id: ""
	I0229 02:33:46.675334  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.675347  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:46.675355  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:46.675423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:46.717366  370051 cri.go:89] found id: ""
	I0229 02:33:46.717402  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.717413  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:46.717421  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:46.717484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:46.756130  370051 cri.go:89] found id: ""
	I0229 02:33:46.756160  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.756169  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:46.756176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:46.756228  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:46.794283  370051 cri.go:89] found id: ""
	I0229 02:33:46.794312  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.794320  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:46.794328  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:46.794384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:46.836646  370051 cri.go:89] found id: ""
	I0229 02:33:46.836679  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.836691  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:46.836703  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:46.836721  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:46.926532  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:46.926578  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:46.981883  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:46.981915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:47.033571  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:47.033612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:47.049803  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:47.049833  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:47.123389  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:49.623827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:49.638175  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:49.638263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:49.675895  370051 cri.go:89] found id: ""
	I0229 02:33:49.675929  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.675941  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:49.675950  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:49.676009  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:49.720679  370051 cri.go:89] found id: ""
	I0229 02:33:49.720718  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.720730  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:49.720739  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:49.720808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:49.762299  370051 cri.go:89] found id: ""
	I0229 02:33:49.762329  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.762342  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:49.762350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:49.762426  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:49.809330  370051 cri.go:89] found id: ""
	I0229 02:33:49.809364  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.809376  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:49.809391  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:49.809455  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:49.859176  370051 cri.go:89] found id: ""
	I0229 02:33:49.859206  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.859218  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:49.859226  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:49.859292  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:49.914844  370051 cri.go:89] found id: ""
	I0229 02:33:49.914877  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.914890  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:49.914897  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:49.914967  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:49.969640  370051 cri.go:89] found id: ""
	I0229 02:33:49.969667  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.969676  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:49.969682  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:49.969736  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:50.010924  370051 cri.go:89] found id: ""
	I0229 02:33:50.010953  370051 logs.go:276] 0 containers: []
	W0229 02:33:50.010965  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:50.010976  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:50.011002  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:50.089462  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:50.089494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:50.132098  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:50.132129  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:50.182693  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:50.182737  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:50.198209  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:50.198256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:50.281521  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
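Stepping back, the timestamps show process 370051 re-running the whole probe cycle (pgrep, eight crictl queries, five log collectors) roughly every three seconds: the shape of a bounded retry loop. A sketch of that outer loop (interval and deadline are illustrative, inferred from the timestamps rather than taken from minikube's source):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryUntil re-runs probe at the given interval until it succeeds or the
    // timeout elapses, returning the last error on failure.
    func retryUntil(interval, timeout time.Duration, probe func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := probe()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("gave up after %v: %w", timeout, err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := retryUntil(3*time.Second, 30*time.Second, func() error {
            // Stand-in for the pgrep + crictl + log-gathering pass, which in
            // this run keeps finding no kube-apiserver container.
            return errors.New("no container was found matching \"kube-apiserver\"")
        })
        fmt.Println(err)
    }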
	I0229 02:33:49.991831  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.489891  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.586298  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:53.587249  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.513176  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:54.010209  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.781677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:52.795962  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:52.796055  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:52.833670  370051 cri.go:89] found id: ""
	I0229 02:33:52.833706  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.833718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:52.833728  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:52.833795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:52.889497  370051 cri.go:89] found id: ""
	I0229 02:33:52.889529  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.889539  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:52.889547  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:52.889616  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:52.952880  370051 cri.go:89] found id: ""
	I0229 02:33:52.952915  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.952927  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:52.952935  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:52.953002  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:53.008380  370051 cri.go:89] found id: ""
	I0229 02:33:53.008409  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.008420  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:53.008434  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:53.008502  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:53.047877  370051 cri.go:89] found id: ""
	I0229 02:33:53.047911  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.047922  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:53.047931  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:53.047999  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:53.086080  370051 cri.go:89] found id: ""
	I0229 02:33:53.086107  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.086118  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:53.086127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:53.086193  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:53.128334  370051 cri.go:89] found id: ""
	I0229 02:33:53.128368  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.128378  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:53.128385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:53.128457  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:53.172201  370051 cri.go:89] found id: ""
	I0229 02:33:53.172232  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.172245  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:53.172258  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:53.172275  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:53.222608  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:53.222648  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:53.239888  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:53.239918  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:53.315827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:53.315850  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:53.315864  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:53.395457  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:53.395498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:55.943418  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:55.960562  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:55.960638  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:56.005181  370051 cri.go:89] found id: ""
	I0229 02:33:56.005210  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.005221  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:56.005229  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:56.005293  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:56.046700  370051 cri.go:89] found id: ""
	I0229 02:33:56.046731  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.046743  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:56.046750  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:56.046814  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:56.088459  370051 cri.go:89] found id: ""
	I0229 02:33:56.088486  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.088497  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:56.088505  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:56.088571  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:56.127729  370051 cri.go:89] found id: ""
	I0229 02:33:56.127762  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.127774  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:56.127783  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:56.127862  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:54.491536  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.493973  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.089188  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.586570  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.011539  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.509708  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.169980  370051 cri.go:89] found id: ""
	I0229 02:33:56.170011  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.170022  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:56.170030  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:56.170098  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:56.210650  370051 cri.go:89] found id: ""
	I0229 02:33:56.210682  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.210694  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:56.210704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:56.210771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:56.247342  370051 cri.go:89] found id: ""
	I0229 02:33:56.247380  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.247391  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:56.247400  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:56.247474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:56.286322  370051 cri.go:89] found id: ""
	I0229 02:33:56.286353  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.286364  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:56.286375  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:56.286393  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:56.335144  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:56.335184  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:56.351322  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:56.351359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:56.424251  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:56.424282  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:56.424299  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:56.506053  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:56.506082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
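
	For context, the cycle above repeats throughout this section: with the apiserver down, minikube's diagnostics query crictl for each expected control-plane container, find none ("found id: ''"), and fall back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal Go sketch of that query pattern follows (hypothetical code, not minikube's actual source; assumes crictl is installed and passwordless sudo is available):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors the command in the log:
    //   sudo crictl ps -a --quiet --name=<name>
    // It returns the matching container IDs; in this report the result
    // is always empty because no control-plane containers are running.
    func listContainers(name string) []string {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	for _, name := range []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	} {
    		if ids := listContainers(name); len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    		} else {
    			fmt.Printf("found ids for %q: %v\n", name, ids)
    		}
    	}
    }

	Note also the fallback in the container-status command itself: `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a` tries crictl first and falls back to docker if crictl is absent or fails.
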
	I0229 02:33:59.052805  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:59.067508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:59.067599  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:59.114213  370051 cri.go:89] found id: ""
	I0229 02:33:59.114256  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.114268  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:59.114276  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:59.114327  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:59.161087  370051 cri.go:89] found id: ""
	I0229 02:33:59.161123  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.161136  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:59.161145  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:59.161217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:59.206071  370051 cri.go:89] found id: ""
	I0229 02:33:59.206101  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.206114  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:59.206122  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:59.206196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:59.245152  370051 cri.go:89] found id: ""
	I0229 02:33:59.245179  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.245188  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:59.245194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:59.245247  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:59.286047  370051 cri.go:89] found id: ""
	I0229 02:33:59.286080  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.286092  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:59.286101  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:59.286165  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:59.323171  370051 cri.go:89] found id: ""
	I0229 02:33:59.323203  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.323214  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:59.323222  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:59.323288  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:59.364434  370051 cri.go:89] found id: ""
	I0229 02:33:59.364464  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.364477  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:59.364485  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:59.364554  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:59.405902  370051 cri.go:89] found id: ""
	I0229 02:33:59.405929  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.405938  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:59.405948  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:59.405980  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:59.481810  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:59.481841  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:59.481858  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:59.575726  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:59.575767  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.634808  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:59.634849  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:59.702513  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:59.702552  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:58.991152  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.490426  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:00.587747  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:02.594677  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.010009  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:03.509687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
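
	The interleaved pod_ready.go lines come from three parallel StartStop tests (PIDs 369591, 369508, 369869), each polling its own metrics-server pod for the Ready condition roughly every two to two and a half seconds. A minimal client-go sketch of such a readiness poll (hypothetical code, not minikube's own helper; the kubeconfig path is an assumption, and the pod name is taken from the log above):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumed path; the tests use per-profile kubeconfigs.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").Get(
    			context.TODO(), "metrics-server-57f55c9bc5-zghwq", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		fmt.Println(`pod has status "Ready":"False"`)
    		time.Sleep(2 * time.Second)
    	}
    }
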
	I0229 02:34:02.219660  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:02.234037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:02.234105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:02.277956  370051 cri.go:89] found id: ""
	I0229 02:34:02.277982  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.277991  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:02.277998  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:02.278071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:02.322832  370051 cri.go:89] found id: ""
	I0229 02:34:02.322856  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.322869  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:02.322878  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:02.322949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:02.368612  370051 cri.go:89] found id: ""
	I0229 02:34:02.368646  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.368659  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:02.368668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:02.368731  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:02.412436  370051 cri.go:89] found id: ""
	I0229 02:34:02.412466  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.412479  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:02.412486  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:02.412544  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:02.448682  370051 cri.go:89] found id: ""
	I0229 02:34:02.448713  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.448724  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:02.448733  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:02.448803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:02.486676  370051 cri.go:89] found id: ""
	I0229 02:34:02.486705  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.486723  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:02.486730  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:02.486795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:02.531814  370051 cri.go:89] found id: ""
	I0229 02:34:02.531841  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.531852  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:02.531860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:02.531934  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:02.569800  370051 cri.go:89] found id: ""
	I0229 02:34:02.569835  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.569845  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:02.569857  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:02.569871  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:02.623903  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:02.623937  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:02.643856  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:02.643884  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:02.735520  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:02.735544  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:02.735563  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:02.816572  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:02.816612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
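
	Every "describe nodes" attempt in this section fails the same way: the kubeconfig points kubectl at localhost:8443, no kube-apiserver is listening there, and the TCP connect is refused, so only the stderr line comes back. A quick reachability check equivalent to what that error implies (hypothetical snippet, not part of the test suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// With no kube-apiserver container running, this dial fails with
    	// "connection refused", which kubectl surfaces as
    	// "The connection to the server localhost:8443 was refused".
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
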
	I0229 02:34:05.371459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:05.385179  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:05.385255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:05.424653  370051 cri.go:89] found id: ""
	I0229 02:34:05.424679  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.424687  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:05.424694  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:05.424752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:05.463726  370051 cri.go:89] found id: ""
	I0229 02:34:05.463754  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.463763  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:05.463769  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:05.463823  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:05.510367  370051 cri.go:89] found id: ""
	I0229 02:34:05.510396  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.510407  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:05.510415  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:05.510480  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:05.548421  370051 cri.go:89] found id: ""
	I0229 02:34:05.548445  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.548455  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:05.548461  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:05.548527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:05.588778  370051 cri.go:89] found id: ""
	I0229 02:34:05.588801  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.588809  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:05.588815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:05.588875  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:05.638449  370051 cri.go:89] found id: ""
	I0229 02:34:05.638479  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.638490  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:05.638506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:05.638567  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:05.709921  370051 cri.go:89] found id: ""
	I0229 02:34:05.709950  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.709964  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:05.709972  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:05.710038  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:05.756965  370051 cri.go:89] found id: ""
	I0229 02:34:05.756992  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.757000  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:05.757010  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:05.757025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:05.826878  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:05.826904  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:05.826921  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:05.909205  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:05.909256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.954537  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:05.954594  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:06.004157  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:06.004203  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:03.989381  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.990323  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.491379  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.086296  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:07.586477  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.511758  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.009545  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.010247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.522975  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:08.539247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:08.539326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:08.579776  370051 cri.go:89] found id: ""
	I0229 02:34:08.579806  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.579817  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:08.579826  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:08.579890  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:08.628415  370051 cri.go:89] found id: ""
	I0229 02:34:08.628444  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.628456  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:08.628468  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:08.628534  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:08.690499  370051 cri.go:89] found id: ""
	I0229 02:34:08.690530  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.690540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:08.690547  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:08.690613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:08.739755  370051 cri.go:89] found id: ""
	I0229 02:34:08.739788  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.739801  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:08.739809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:08.739906  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:08.781693  370051 cri.go:89] found id: ""
	I0229 02:34:08.781721  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.781733  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:08.781742  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:08.781808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:08.818605  370051 cri.go:89] found id: ""
	I0229 02:34:08.818637  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.818645  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:08.818652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:08.818713  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:08.861533  370051 cri.go:89] found id: ""
	I0229 02:34:08.861559  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.861569  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:08.861578  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:08.861658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:08.902727  370051 cri.go:89] found id: ""
	I0229 02:34:08.902758  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.902771  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:08.902784  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:08.902801  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:08.948527  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:08.948567  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:08.999883  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:08.999916  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:09.015438  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:09.015467  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:09.087965  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:09.087994  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:09.088010  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:10.990135  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.991074  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.085517  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.086653  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:14.086817  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.510247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:15.010412  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:11.671443  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:11.702197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:11.702322  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:11.755104  370051 cri.go:89] found id: ""
	I0229 02:34:11.755136  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.755147  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:11.755153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:11.755204  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:11.794190  370051 cri.go:89] found id: ""
	I0229 02:34:11.794218  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.794239  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:11.794247  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:11.794310  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:11.837330  370051 cri.go:89] found id: ""
	I0229 02:34:11.837360  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.837372  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:11.837380  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:11.837447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:11.876682  370051 cri.go:89] found id: ""
	I0229 02:34:11.876716  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.876726  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:11.876734  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:11.876805  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:11.922172  370051 cri.go:89] found id: ""
	I0229 02:34:11.922239  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.922262  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:11.922271  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:11.922341  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:11.962218  370051 cri.go:89] found id: ""
	I0229 02:34:11.962270  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.962283  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:11.962291  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:11.962375  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:12.002075  370051 cri.go:89] found id: ""
	I0229 02:34:12.002101  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.002110  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:12.002117  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:12.002169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:12.043337  370051 cri.go:89] found id: ""
	I0229 02:34:12.043378  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.043399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:12.043412  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:12.043428  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:12.094458  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:12.094491  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:12.112374  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:12.112401  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:12.193665  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:12.193689  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:12.193717  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:12.282510  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:12.282553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:14.828451  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:14.843626  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:14.843690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:14.884181  370051 cri.go:89] found id: ""
	I0229 02:34:14.884214  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.884226  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:14.884235  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:14.884302  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:14.926312  370051 cri.go:89] found id: ""
	I0229 02:34:14.926347  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.926361  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:14.926369  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:14.926436  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:14.969147  370051 cri.go:89] found id: ""
	I0229 02:34:14.969182  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.969195  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:14.969207  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:14.969277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:15.013000  370051 cri.go:89] found id: ""
	I0229 02:34:15.013045  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.013055  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:15.013064  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:15.013120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:15.055811  370051 cri.go:89] found id: ""
	I0229 02:34:15.055849  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.055861  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:15.055869  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:15.055939  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:15.100736  370051 cri.go:89] found id: ""
	I0229 02:34:15.100768  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.100780  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:15.100789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:15.100867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:15.140115  370051 cri.go:89] found id: ""
	I0229 02:34:15.140151  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.140164  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:15.140172  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:15.140239  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:15.183545  370051 cri.go:89] found id: ""
	I0229 02:34:15.183576  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.183588  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:15.183602  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:15.183621  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:15.258646  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:15.258676  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:15.258693  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:15.347035  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:15.347082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:15.407148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:15.407178  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:15.466695  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:15.466741  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:15.490797  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.990851  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:16.585993  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:18.587604  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.509114  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:19.509856  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.989102  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:18.005052  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:18.005126  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:18.044687  370051 cri.go:89] found id: ""
	I0229 02:34:18.044714  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.044725  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:18.044739  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:18.044815  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:18.085904  370051 cri.go:89] found id: ""
	I0229 02:34:18.085934  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.085944  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:18.085952  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:18.086017  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:18.129958  370051 cri.go:89] found id: ""
	I0229 02:34:18.129985  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.129994  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:18.129999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:18.130052  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:18.166942  370051 cri.go:89] found id: ""
	I0229 02:34:18.166979  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.166991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:18.167000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:18.167056  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:18.205297  370051 cri.go:89] found id: ""
	I0229 02:34:18.205324  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.205331  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:18.205337  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:18.205410  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:18.246415  370051 cri.go:89] found id: ""
	I0229 02:34:18.246448  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.246461  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:18.246469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:18.246527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:18.285534  370051 cri.go:89] found id: ""
	I0229 02:34:18.285573  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.285585  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:18.285600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:18.285662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:18.327624  370051 cri.go:89] found id: ""
	I0229 02:34:18.327651  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.327659  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:18.327670  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:18.327684  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:18.383307  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:18.383351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:18.408127  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:18.408162  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:18.502036  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:18.502070  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:18.502093  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:18.582289  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:18.582340  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:20.490582  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:22.990210  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.086446  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:23.586600  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.511411  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:24.009976  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.135649  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:21.149411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:21.149498  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:21.198246  370051 cri.go:89] found id: ""
	I0229 02:34:21.198286  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.198298  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:21.198306  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:21.198378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:21.240168  370051 cri.go:89] found id: ""
	I0229 02:34:21.240195  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.240203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:21.240209  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:21.240275  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:21.281243  370051 cri.go:89] found id: ""
	I0229 02:34:21.281277  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.281288  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:21.281296  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:21.281359  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:21.321573  370051 cri.go:89] found id: ""
	I0229 02:34:21.321609  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.321621  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:21.321629  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:21.321693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:21.375156  370051 cri.go:89] found id: ""
	I0229 02:34:21.375212  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.375226  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:21.375234  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:21.375308  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:21.430450  370051 cri.go:89] found id: ""
	I0229 02:34:21.430487  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.430499  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:21.430508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:21.430576  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:21.475095  370051 cri.go:89] found id: ""
	I0229 02:34:21.475124  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.475135  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:21.475144  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:21.475215  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:21.517378  370051 cri.go:89] found id: ""
	I0229 02:34:21.517403  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.517412  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:21.517424  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:21.517444  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:21.534103  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:21.534147  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:21.608375  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:21.608400  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:21.608412  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:21.691912  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:21.691950  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:21.744366  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:21.744406  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.295384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:24.309456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:24.309539  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:24.370125  370051 cri.go:89] found id: ""
	I0229 02:34:24.370156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.370167  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:24.370175  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:24.370256  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:24.439458  370051 cri.go:89] found id: ""
	I0229 02:34:24.439487  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.439499  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:24.439506  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:24.439639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:24.478070  370051 cri.go:89] found id: ""
	I0229 02:34:24.478105  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.478119  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:24.478127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:24.478194  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:24.517128  370051 cri.go:89] found id: ""
	I0229 02:34:24.517156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.517168  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:24.517176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:24.517243  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:24.555502  370051 cri.go:89] found id: ""
	I0229 02:34:24.555537  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.555549  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:24.555557  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:24.555625  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:24.601261  370051 cri.go:89] found id: ""
	I0229 02:34:24.601295  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.601307  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:24.601315  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:24.601389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:24.639110  370051 cri.go:89] found id: ""
	I0229 02:34:24.639141  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.639153  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:24.639161  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:24.639224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:24.681448  370051 cri.go:89] found id: ""
	I0229 02:34:24.681478  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.681487  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:24.681498  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:24.681517  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.730735  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:24.730775  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:24.746996  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:24.747031  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:24.827581  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:24.827608  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:24.827628  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:24.909551  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:24.909596  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:24.990581  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.489787  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:25.586672  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.586999  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:26.509819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:29.009014  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.455967  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:27.477411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:27.477487  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:27.523163  370051 cri.go:89] found id: ""
	I0229 02:34:27.523189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.523198  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:27.523203  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:27.523258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:27.562298  370051 cri.go:89] found id: ""
	I0229 02:34:27.562330  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.562343  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:27.562350  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:27.562420  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:27.603506  370051 cri.go:89] found id: ""
	I0229 02:34:27.603532  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.603540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:27.603554  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:27.603619  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:27.646971  370051 cri.go:89] found id: ""
	I0229 02:34:27.647002  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.647014  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:27.647031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:27.647109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:27.685124  370051 cri.go:89] found id: ""
	I0229 02:34:27.685149  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.685160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:27.685169  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:27.685235  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:27.726976  370051 cri.go:89] found id: ""
	I0229 02:34:27.727007  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.727018  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:27.727026  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:27.727089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:27.767159  370051 cri.go:89] found id: ""
	I0229 02:34:27.767189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.767197  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:27.767204  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:27.767272  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:27.810377  370051 cri.go:89] found id: ""
	I0229 02:34:27.810411  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.810420  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
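Note: each cri.go listing above runs crictl in quiet mode (`crictl ps -a --quiet --name=<component>`), which prints bare container IDs one per line; empty output is exactly the `found id: ""` / `0 containers` case logged for every control-plane component here. A short illustrative Go sketch of that step (an assumption about the shape of the call, not minikube's actual cri.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (any state) whose name matches
    // the given component and returns their IDs; an empty slice is the
    // `found id: ""` case in the log above.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }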
	I0229 02:34:27.810431  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:27.810447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:27.858094  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:27.858136  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:27.874407  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:27.874440  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:27.953065  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:27.953092  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:27.953108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:28.042244  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:28.042278  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
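Note: the "container status" step runs `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`: prefer crictl if it is on PATH, otherwise still attempt the literal name, and fall back to the docker CLI if the whole crictl invocation fails. A hedged Go sketch of the same fallback order (illustrative only; the real ssh_runner executes this as a shell one-liner inside the VM):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // listContainers tries crictl first and falls back to docker,
    // mirroring the shell fallback in the command above.
    func listContainers() ([]byte, error) {
    	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
    		return out, nil
    	}
    	// crictl missing or erroring: fall back to the docker CLI.
    	return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
    	out, err := listContainers()
    	if err != nil {
    		fmt.Println("no container runtime CLI responded:", err)
    		return
    	}
    	fmt.Print(string(out))
    }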
	I0229 02:34:30.588227  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:30.604954  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:30.605037  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:30.642069  370051 cri.go:89] found id: ""
	I0229 02:34:30.642100  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.642108  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:30.642119  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:30.642187  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:30.686212  370051 cri.go:89] found id: ""
	I0229 02:34:30.686264  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.686277  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:30.686285  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:30.686364  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:30.726668  370051 cri.go:89] found id: ""
	I0229 02:34:30.726702  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.726715  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:30.726723  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:30.726788  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:30.766850  370051 cri.go:89] found id: ""
	I0229 02:34:30.766883  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.766895  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:30.766904  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:30.766979  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:30.808972  370051 cri.go:89] found id: ""
	I0229 02:34:30.809002  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.809015  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:30.809023  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:30.809093  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:30.851992  370051 cri.go:89] found id: ""
	I0229 02:34:30.852016  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.852025  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:30.852031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:30.852096  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:30.891100  370051 cri.go:89] found id: ""
	I0229 02:34:30.891132  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.891144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:30.891157  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:30.891227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:30.931740  370051 cri.go:89] found id: ""
	I0229 02:34:30.931768  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.931777  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:30.931787  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:30.931808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:31.010896  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:31.010919  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:31.010936  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:31.094626  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:31.094662  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:29.490211  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.490659  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:30.086898  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:32.587485  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.010003  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:33.510267  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
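Note: each pod_ready.go:102 line above is one poll iteration from a different test cluster (processes 369508, 369591, 369869), all reporting that their metrics-server pod's Ready condition is still False. A minimal sketch of how such a readiness check is derived from pod status, assuming the Pod object has already been fetched with client-go (not minikube's own pod_ready implementation):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // which is what the pod_ready poll loops above are waiting for.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	pod := &corev1.Pod{} // in practice, fetched via client-go
    	fmt.Println("ready:", isPodReady(pod))
    }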
	I0229 02:34:31.150765  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:31.150804  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:31.202932  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:31.202976  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:33.723355  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:33.738651  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:33.738753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:33.778255  370051 cri.go:89] found id: ""
	I0229 02:34:33.778287  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.778299  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:33.778307  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:33.778384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:33.818360  370051 cri.go:89] found id: ""
	I0229 02:34:33.818396  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.818406  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:33.818412  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:33.818564  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:33.866781  370051 cri.go:89] found id: ""
	I0229 02:34:33.866814  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.866824  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:33.866831  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:33.866891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:33.910013  370051 cri.go:89] found id: ""
	I0229 02:34:33.910051  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.910063  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:33.910072  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:33.910146  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:33.956068  370051 cri.go:89] found id: ""
	I0229 02:34:33.956098  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.956106  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:33.956113  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:33.956170  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:34.004997  370051 cri.go:89] found id: ""
	I0229 02:34:34.005027  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.005038  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:34.005047  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:34.005113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:34.059266  370051 cri.go:89] found id: ""
	I0229 02:34:34.059293  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.059302  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:34.059307  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:34.059363  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:34.105601  370051 cri.go:89] found id: ""
	I0229 02:34:34.105631  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.105643  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:34.105654  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:34.105669  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:34.208723  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:34.208764  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:34.262105  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:34.262137  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:34.314528  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:34.314571  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:34.332441  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:34.332477  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:34.406303  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:33.990257  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.490844  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:35.085482  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:37.086532  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:39.087022  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.015574  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:38.510064  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.906814  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:36.922297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:36.922377  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:36.967550  370051 cri.go:89] found id: ""
	I0229 02:34:36.967578  370051 logs.go:276] 0 containers: []
	W0229 02:34:36.967589  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:36.967599  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:36.967662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:37.007589  370051 cri.go:89] found id: ""
	I0229 02:34:37.007614  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.007624  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:37.007632  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:37.007706  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:37.048230  370051 cri.go:89] found id: ""
	I0229 02:34:37.048260  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.048273  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:37.048281  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:37.048354  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:37.089329  370051 cri.go:89] found id: ""
	I0229 02:34:37.089355  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.089365  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:37.089373  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:37.089441  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:37.144654  370051 cri.go:89] found id: ""
	I0229 02:34:37.144687  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.144699  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:37.144708  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:37.144778  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:37.203822  370051 cri.go:89] found id: ""
	I0229 02:34:37.203857  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.203868  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:37.203876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:37.203948  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:37.250369  370051 cri.go:89] found id: ""
	I0229 02:34:37.250398  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.250410  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:37.250417  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:37.250490  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:37.290924  370051 cri.go:89] found id: ""
	I0229 02:34:37.290957  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.290969  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:37.290981  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:37.290995  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:37.343878  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:37.343920  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:37.359307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:37.359336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:37.435264  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:37.435292  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:37.435309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:37.518274  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:37.518309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:40.062232  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:40.079883  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:40.079957  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:40.123826  370051 cri.go:89] found id: ""
	I0229 02:34:40.123856  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.123866  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:40.123874  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:40.123943  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:40.190273  370051 cri.go:89] found id: ""
	I0229 02:34:40.190321  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.190332  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:40.190338  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:40.190395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:40.232921  370051 cri.go:89] found id: ""
	I0229 02:34:40.232949  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.232961  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:40.232968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:40.233034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:40.273490  370051 cri.go:89] found id: ""
	I0229 02:34:40.273517  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.273526  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:40.273538  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:40.273594  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:40.317121  370051 cri.go:89] found id: ""
	I0229 02:34:40.317152  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.317163  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:40.317171  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:40.317230  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:40.363347  370051 cri.go:89] found id: ""
	I0229 02:34:40.363380  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.363389  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:40.363396  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:40.363459  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:40.407187  370051 cri.go:89] found id: ""
	I0229 02:34:40.407213  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.407222  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:40.407231  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:40.407282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:40.447185  370051 cri.go:89] found id: ""
	I0229 02:34:40.447218  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.447229  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:40.447242  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:40.447258  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:40.496998  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:40.497029  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:40.512520  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:40.512549  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:40.589150  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:40.589173  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:40.589190  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:40.677054  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:40.677096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
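Note: the log-gathering steps shell out to journalctl (-u selects the systemd unit, -n caps the line count) and to dmesg filtered to warn-and-above severities. A hedged Go sketch of running one such command and capturing its output (illustrative; in minikube these commands run over SSH inside the guest VM, not on the host):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // unitLogs returns the last n journal lines for a systemd unit,
    // the same shape as the "journalctl -u kubelet -n 400" calls above.
    func unitLogs(unit string, n int) (string, error) {
    	cmd := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n))
    	out, err := cmd.CombinedOutput()
    	return string(out), err
    }

    func main() {
    	logs, err := unitLogs("kubelet", 400)
    	if err != nil {
    		fmt.Println("journalctl failed:", err)
    		return
    	}
    	fmt.Print(logs)
    }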
	I0229 02:34:38.991307  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:40.992688  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.490195  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.585962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.586942  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.009837  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.510138  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.222265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:43.236567  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:43.236629  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:43.282917  370051 cri.go:89] found id: ""
	I0229 02:34:43.282959  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.282976  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:43.282982  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:43.283049  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:43.329273  370051 cri.go:89] found id: ""
	I0229 02:34:43.329302  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.329313  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:43.329321  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:43.329386  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:43.366696  370051 cri.go:89] found id: ""
	I0229 02:34:43.366723  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.366732  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:43.366739  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:43.366800  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:43.405793  370051 cri.go:89] found id: ""
	I0229 02:34:43.405820  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.405828  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:43.405834  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:43.405888  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:43.442870  370051 cri.go:89] found id: ""
	I0229 02:34:43.442898  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.442906  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:43.442912  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:43.442964  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:43.484581  370051 cri.go:89] found id: ""
	I0229 02:34:43.484615  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.484626  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:43.484635  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:43.484702  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:43.530931  370051 cri.go:89] found id: ""
	I0229 02:34:43.530954  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.530963  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:43.530968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:43.531024  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:43.572810  370051 cri.go:89] found id: ""
	I0229 02:34:43.572838  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.572850  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:43.572867  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:43.572883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:43.622815  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:43.622854  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:43.637972  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:43.638012  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:43.713704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:43.713728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:43.713746  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:43.797178  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:43.797220  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:45.490670  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:47.989828  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:45.587464  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.090384  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.009454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.010403  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.347159  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:46.361601  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:46.361682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:46.399751  370051 cri.go:89] found id: ""
	I0229 02:34:46.399784  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.399795  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:46.399804  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:46.399870  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:46.445367  370051 cri.go:89] found id: ""
	I0229 02:34:46.445398  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.445407  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:46.445413  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:46.445486  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:46.490323  370051 cri.go:89] found id: ""
	I0229 02:34:46.490363  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.490385  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:46.490393  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:46.490473  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:46.531406  370051 cri.go:89] found id: ""
	I0229 02:34:46.531441  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.531450  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:46.531456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:46.531507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:46.572759  370051 cri.go:89] found id: ""
	I0229 02:34:46.572787  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.572795  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:46.572804  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:46.572908  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:46.613055  370051 cri.go:89] found id: ""
	I0229 02:34:46.613083  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.613093  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:46.613099  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:46.613153  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:46.657504  370051 cri.go:89] found id: ""
	I0229 02:34:46.657536  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.657544  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:46.657550  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:46.657605  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:46.698008  370051 cri.go:89] found id: ""
	I0229 02:34:46.698057  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.698068  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:46.698080  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:46.698097  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:46.746648  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:46.746682  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:46.761190  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:46.761219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:46.843379  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:46.843403  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:46.843415  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:46.933493  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:46.933546  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.491837  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:49.508647  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:49.508717  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:49.550752  370051 cri.go:89] found id: ""
	I0229 02:34:49.550788  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.550800  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:49.550809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:49.550883  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:49.597623  370051 cri.go:89] found id: ""
	I0229 02:34:49.597663  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.597675  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:49.597683  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:49.597764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:49.635207  370051 cri.go:89] found id: ""
	I0229 02:34:49.635230  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.635238  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:49.635282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:49.635336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:49.674664  370051 cri.go:89] found id: ""
	I0229 02:34:49.674696  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.674708  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:49.674716  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:49.674777  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:49.715391  370051 cri.go:89] found id: ""
	I0229 02:34:49.715420  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.715433  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:49.715442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:49.715497  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:49.753318  370051 cri.go:89] found id: ""
	I0229 02:34:49.753352  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.753373  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:49.753382  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:49.753451  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:49.791342  370051 cri.go:89] found id: ""
	I0229 02:34:49.791369  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.791377  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:49.791384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:49.791456  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:49.838148  370051 cri.go:89] found id: ""
	I0229 02:34:49.838181  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.838191  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:49.838204  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:49.838244  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:49.891532  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:49.891568  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:49.917625  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:49.917664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:50.019436  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:50.019457  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:50.019472  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:50.108302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:50.108349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
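Note: the timestamps show the whole 370051 sequence repeating roughly every three seconds — pgrep for a kube-apiserver process, list CRI containers per component, gather logs, try again — presumably until an overall start timeout expires. A minimal, standard-library-only sketch of that outer retry shape (an assumption about the loop structure, not the actual minikube wait code):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pollUntil re-runs check on a fixed interval until it succeeds or
    // the deadline passes, like the ~3s apiserver-health loop above.
    func pollUntil(interval, timeout time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: last error: %w", err)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := pollUntil(3*time.Second, 15*time.Second, func() error {
    		return errors.New("apiserver process not found") // stand-in check
    	})
    	fmt.Println(err)
    }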
	I0229 02:34:49.991272  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.491139  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.586940  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.509504  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:53.010818  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.654561  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:52.668331  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:52.668402  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:52.718431  370051 cri.go:89] found id: ""
	I0229 02:34:52.718471  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.718484  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:52.718493  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:52.718551  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:52.757913  370051 cri.go:89] found id: ""
	I0229 02:34:52.757946  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.757957  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:52.757965  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:52.758035  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:52.796792  370051 cri.go:89] found id: ""
	I0229 02:34:52.796821  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.796833  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:52.796842  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:52.796913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:52.832157  370051 cri.go:89] found id: ""
	I0229 02:34:52.832187  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.832196  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:52.832203  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:52.832264  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:52.879170  370051 cri.go:89] found id: ""
	I0229 02:34:52.879197  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.879206  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:52.879212  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:52.879265  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:52.924219  370051 cri.go:89] found id: ""
	I0229 02:34:52.924249  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.924258  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:52.924264  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:52.924318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:52.980422  370051 cri.go:89] found id: ""
	I0229 02:34:52.980450  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.980457  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:52.980463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:52.980525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:53.026393  370051 cri.go:89] found id: ""
	I0229 02:34:53.026418  370051 logs.go:276] 0 containers: []
	W0229 02:34:53.026426  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:53.026436  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:53.026453  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:53.075135  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:53.075174  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:53.092197  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:53.092223  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:53.164397  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:53.164423  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:53.164439  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:53.250310  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:53.250366  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:55.792993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:55.807152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:55.807229  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:55.867791  370051 cri.go:89] found id: ""
	I0229 02:34:55.867821  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.867830  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:55.867847  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:55.867925  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:55.922960  370051 cri.go:89] found id: ""
	I0229 02:34:55.922989  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.923001  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:55.923009  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:55.923076  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:55.972510  370051 cri.go:89] found id: ""
	I0229 02:34:55.972541  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.972552  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:55.972560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:55.972632  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:56.011948  370051 cri.go:89] found id: ""
	I0229 02:34:56.011980  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.011990  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:56.011999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:56.012077  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:56.052624  370051 cri.go:89] found id: ""
	I0229 02:34:56.052653  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.052662  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:56.052668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:56.052722  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:56.089075  370051 cri.go:89] found id: ""
	I0229 02:34:56.089100  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.089108  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:56.089114  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:56.089180  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:56.130369  370051 cri.go:89] found id: ""
	I0229 02:34:56.130403  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.130416  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:56.130424  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:56.130496  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:54.989569  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.991424  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.085652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.585291  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.586439  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.509734  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.510165  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.511749  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.177812  370051 cri.go:89] found id: ""
	I0229 02:34:56.177843  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.177854  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:56.177875  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:56.177894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:56.224294  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:56.224336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:56.275874  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:56.275909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:56.291172  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:56.291202  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:56.364839  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:56.364870  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:56.364888  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:58.950871  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:58.966327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:58.966389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:59.005914  370051 cri.go:89] found id: ""
	I0229 02:34:59.005952  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.005968  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:59.005976  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:59.006045  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:59.043962  370051 cri.go:89] found id: ""
	I0229 02:34:59.043993  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.044005  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:59.044013  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:59.044167  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:59.089398  370051 cri.go:89] found id: ""
	I0229 02:34:59.089426  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.089434  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:59.089440  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:59.089491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:59.130786  370051 cri.go:89] found id: ""
	I0229 02:34:59.130815  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.130824  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:59.130830  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:59.130909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:59.174807  370051 cri.go:89] found id: ""
	I0229 02:34:59.174836  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.174848  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:59.174855  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:59.174929  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:59.217745  370051 cri.go:89] found id: ""
	I0229 02:34:59.217792  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.217800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:59.217806  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:59.217858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:59.260906  370051 cri.go:89] found id: ""
	I0229 02:34:59.260939  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.260950  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:59.260957  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:59.261025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:59.299114  370051 cri.go:89] found id: ""
	I0229 02:34:59.299140  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.299150  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:59.299161  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:59.299173  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:59.349630  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:59.349672  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:59.365679  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:59.365710  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:59.438234  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:59.438261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:59.438280  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:59.524185  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:59.524219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:58.991975  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:01.489719  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:03.490315  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.087731  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.585197  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.008802  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.509210  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.068320  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:02.082910  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:02.082988  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:02.122095  370051 cri.go:89] found id: ""
	I0229 02:35:02.122132  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.122145  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:02.122153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:02.122245  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:02.160982  370051 cri.go:89] found id: ""
	I0229 02:35:02.161013  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.161029  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:02.161043  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:02.161108  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:02.200603  370051 cri.go:89] found id: ""
	I0229 02:35:02.200637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.200650  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:02.200658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:02.200746  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:02.243100  370051 cri.go:89] found id: ""
	I0229 02:35:02.243126  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.243134  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:02.243140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:02.243207  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:02.282758  370051 cri.go:89] found id: ""
	I0229 02:35:02.282793  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.282806  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:02.282815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:02.282884  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:02.324402  370051 cri.go:89] found id: ""
	I0229 02:35:02.324434  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.324444  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:02.324455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:02.324520  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:02.368608  370051 cri.go:89] found id: ""
	I0229 02:35:02.368637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.368650  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:02.368658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:02.368726  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:02.411449  370051 cri.go:89] found id: ""
	I0229 02:35:02.411484  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.411497  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:02.411509  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:02.411526  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:02.427942  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:02.427974  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:02.498848  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:02.498884  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:02.498902  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:02.585701  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:02.585749  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:02.642055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:02.642096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.201769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:05.215944  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:05.216020  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:05.254080  370051 cri.go:89] found id: ""
	I0229 02:35:05.254107  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.254121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:05.254128  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:05.254179  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:05.296990  370051 cri.go:89] found id: ""
	I0229 02:35:05.297022  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.297034  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:05.297042  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:05.297111  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:05.336241  370051 cri.go:89] found id: ""
	I0229 02:35:05.336275  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.336290  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:05.336299  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:05.336395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:05.377620  370051 cri.go:89] found id: ""
	I0229 02:35:05.377649  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.377658  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:05.377664  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:05.377712  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:05.416275  370051 cri.go:89] found id: ""
	I0229 02:35:05.416303  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.416311  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:05.416318  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:05.416373  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:05.455375  370051 cri.go:89] found id: ""
	I0229 02:35:05.455412  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.455426  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:05.455436  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:05.455507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:05.495862  370051 cri.go:89] found id: ""
	I0229 02:35:05.495887  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.495897  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:05.495905  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:05.495969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:05.541218  370051 cri.go:89] found id: ""
	I0229 02:35:05.541247  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.541260  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:05.541273  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:05.541288  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:05.629982  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:05.630023  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:05.719026  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:05.719066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.785318  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:05.785359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:05.801181  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:05.801214  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:05.871333  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:05.490857  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:07.991044  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.587458  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:09.086313  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.510265  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.510391  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.371982  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:08.386451  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:08.386514  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:08.430045  370051 cri.go:89] found id: ""
	I0229 02:35:08.430077  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.430090  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:08.430099  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:08.430169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:08.470547  370051 cri.go:89] found id: ""
	I0229 02:35:08.470583  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.470596  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:08.470604  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:08.470671  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:08.512637  370051 cri.go:89] found id: ""
	I0229 02:35:08.512676  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.512687  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:08.512695  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:08.512759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:08.556228  370051 cri.go:89] found id: ""
	I0229 02:35:08.556263  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.556271  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:08.556277  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:08.556335  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:08.613838  370051 cri.go:89] found id: ""
	I0229 02:35:08.613868  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.613878  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:08.613884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:08.613940  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:08.686408  370051 cri.go:89] found id: ""
	I0229 02:35:08.686442  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.686454  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:08.686462  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:08.686519  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:08.725665  370051 cri.go:89] found id: ""
	I0229 02:35:08.725697  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.725710  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:08.725719  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:08.725776  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:08.765639  370051 cri.go:89] found id: ""
	I0229 02:35:08.765666  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.765674  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:08.765684  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:08.765695  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:08.813097  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:08.813135  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:08.828880  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:08.828909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:08.903237  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:08.903261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:08.903281  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:08.991710  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:08.991745  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:10.491022  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:12.491159  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.086828  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.586274  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.009650  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.011571  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.536724  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:11.551614  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:11.551690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:11.593078  370051 cri.go:89] found id: ""
	I0229 02:35:11.593110  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.593121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:11.593129  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:11.593185  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:11.645696  370051 cri.go:89] found id: ""
	I0229 02:35:11.645729  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.645742  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:11.645751  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:11.645820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:11.691181  370051 cri.go:89] found id: ""
	I0229 02:35:11.691213  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.691226  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:11.691245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:11.691318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:11.745906  370051 cri.go:89] found id: ""
	I0229 02:35:11.745933  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.745946  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:11.745953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:11.746019  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:11.784895  370051 cri.go:89] found id: ""
	I0229 02:35:11.784927  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.784940  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:11.784949  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:11.785025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:11.825341  370051 cri.go:89] found id: ""
	I0229 02:35:11.825372  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.825384  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:11.825392  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:11.825464  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:11.862454  370051 cri.go:89] found id: ""
	I0229 02:35:11.862492  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.862505  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:11.862523  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:11.862604  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:11.908424  370051 cri.go:89] found id: ""
	I0229 02:35:11.908450  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.908459  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:11.908469  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:11.908487  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:11.956274  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:11.956313  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:11.972363  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:11.972397  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:12.052030  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:12.052057  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:12.052078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:12.138388  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:12.138431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.691474  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:14.724652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:14.724739  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:14.765210  370051 cri.go:89] found id: ""
	I0229 02:35:14.765237  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.765246  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:14.765253  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:14.765306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:14.808226  370051 cri.go:89] found id: ""
	I0229 02:35:14.808258  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.808270  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:14.808287  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:14.808357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:14.847999  370051 cri.go:89] found id: ""
	I0229 02:35:14.848030  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.848041  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:14.848049  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:14.848123  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:14.887221  370051 cri.go:89] found id: ""
	I0229 02:35:14.887248  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.887256  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:14.887263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:14.887339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:14.929905  370051 cri.go:89] found id: ""
	I0229 02:35:14.929933  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.929950  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:14.929956  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:14.930011  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:14.969697  370051 cri.go:89] found id: ""
	I0229 02:35:14.969739  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.969761  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:14.969770  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:14.969837  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:15.013387  370051 cri.go:89] found id: ""
	I0229 02:35:15.013418  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.013429  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:15.013437  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:15.013493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:15.058199  370051 cri.go:89] found id: ""
	I0229 02:35:15.058240  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.058253  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:15.058270  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:15.058287  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:15.110165  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:15.110213  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:15.127417  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:15.127452  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:15.203330  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:15.203370  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:15.203405  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:15.283455  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:15.283501  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.991352  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.490127  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.586556  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:18.085962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.509530  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.512518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:20.009873  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.829187  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:17.844678  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:17.844759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:17.885549  370051 cri.go:89] found id: ""
	I0229 02:35:17.885581  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.885594  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:17.885601  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:17.885670  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:17.925652  370051 cri.go:89] found id: ""
	I0229 02:35:17.925679  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.925691  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:17.925699  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:17.925766  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:17.963172  370051 cri.go:89] found id: ""
	I0229 02:35:17.963203  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.963215  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:17.963224  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:17.963282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:18.003528  370051 cri.go:89] found id: ""
	I0229 02:35:18.003560  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.003572  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:18.003579  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:18.003644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:18.046494  370051 cri.go:89] found id: ""
	I0229 02:35:18.046526  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.046537  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:18.046545  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:18.046613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:18.084963  370051 cri.go:89] found id: ""
	I0229 02:35:18.084993  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.085004  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:18.085013  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:18.085074  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:18.125521  370051 cri.go:89] found id: ""
	I0229 02:35:18.125547  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.125556  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:18.125563  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:18.125623  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:18.169963  370051 cri.go:89] found id: ""
	I0229 02:35:18.169995  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.170006  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:18.170020  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:18.170035  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:18.225414  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:18.225460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:18.242069  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:18.242108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:18.312704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:18.312728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:18.312742  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:18.397206  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:18.397249  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:20.968000  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:20.983115  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:20.983196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:21.025710  370051 cri.go:89] found id: ""
	I0229 02:35:21.025735  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.025743  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:21.025749  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:21.025812  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:21.065825  370051 cri.go:89] found id: ""
	I0229 02:35:21.065854  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.065862  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:21.065868  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:21.065928  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:21.104738  370051 cri.go:89] found id: ""
	I0229 02:35:21.104770  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.104782  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:21.104790  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:21.104871  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:19.990622  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491026  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491059  369591 pod_ready.go:81] duration metric: took 4m0.008454624s waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:22.491069  369591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:35:22.491077  369591 pod_ready.go:38] duration metric: took 4m5.576507129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:22.491094  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:35:22.491124  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:22.491174  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:22.562384  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:22.562412  369591 cri.go:89] found id: ""
	I0229 02:35:22.562422  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:22.562487  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.567997  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:22.568073  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:22.632786  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:22.632811  369591 cri.go:89] found id: ""
	I0229 02:35:22.632822  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:22.632887  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.637899  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:22.637975  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:22.681988  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:22.682014  369591 cri.go:89] found id: ""
	I0229 02:35:22.682024  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:22.682084  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.687515  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:22.687606  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:22.732907  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:22.732931  369591 cri.go:89] found id: ""
	I0229 02:35:22.732939  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:22.732995  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.737695  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:22.737758  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:22.779316  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:22.779341  369591 cri.go:89] found id: ""
	I0229 02:35:22.779349  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:22.779413  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.786533  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:22.786617  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:22.834391  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:22.834420  369591 cri.go:89] found id: ""
	I0229 02:35:22.834430  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:22.834500  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.839386  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:22.839458  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:22.881275  369591 cri.go:89] found id: ""
	I0229 02:35:22.881304  369591 logs.go:276] 0 containers: []
	W0229 02:35:22.881317  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:22.881326  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:22.881404  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:22.932822  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:22.932846  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:22.932850  369591 cri.go:89] found id: ""
	I0229 02:35:22.932858  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:22.932913  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.938541  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.943263  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:22.943288  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:22.994089  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:22.994122  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:23.051780  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:23.051821  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:23.099220  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:23.099251  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:23.157383  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:23.157429  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:23.206125  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:23.206180  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:23.261950  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:23.261982  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:23.324394  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:23.324427  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:23.400608  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:23.400648  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:20.589079  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:23.088469  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.510074  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:24.002388  369869 pod_ready.go:81] duration metric: took 4m0.000212386s waiting for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:24.002420  369869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:35:24.002439  369869 pod_ready.go:38] duration metric: took 4m6.701505951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:24.002490  369869 kubeadm.go:640] restartCluster took 4m24.423602043s
	W0229 02:35:24.002593  369869 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:35:24.002621  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:35:21.147180  370051 cri.go:89] found id: ""
	I0229 02:35:21.147211  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.147221  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:21.147228  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:21.147284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:21.187240  370051 cri.go:89] found id: ""
	I0229 02:35:21.187275  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.187287  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:21.187295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:21.187389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:21.228873  370051 cri.go:89] found id: ""
	I0229 02:35:21.228899  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.228917  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:21.228924  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:21.228992  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:21.268827  370051 cri.go:89] found id: ""
	I0229 02:35:21.268856  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.268867  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:21.268876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:21.268970  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:21.313253  370051 cri.go:89] found id: ""
	I0229 02:35:21.313288  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.313297  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:21.313307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:21.313328  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:21.448089  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:21.448120  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:21.448146  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:21.539941  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:21.539983  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:21.590148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:21.590186  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:21.647760  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:21.647797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:24.165842  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:24.183263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:24.183345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:24.233173  370051 cri.go:89] found id: ""
	I0229 02:35:24.233208  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.233219  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:24.233228  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:24.233301  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:24.276937  370051 cri.go:89] found id: ""
	I0229 02:35:24.276977  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.276989  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:24.276998  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:24.277066  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:24.314629  370051 cri.go:89] found id: ""
	I0229 02:35:24.314665  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.314678  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:24.314686  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:24.314753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:24.367585  370051 cri.go:89] found id: ""
	I0229 02:35:24.367618  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.367630  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:24.367639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:24.367709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:24.451128  370051 cri.go:89] found id: ""
	I0229 02:35:24.451151  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.451160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:24.451167  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:24.451258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:24.497302  370051 cri.go:89] found id: ""
	I0229 02:35:24.497336  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.497348  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:24.497357  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:24.497431  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:24.544593  370051 cri.go:89] found id: ""
	I0229 02:35:24.544621  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.544632  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:24.544640  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:24.544714  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:24.584570  370051 cri.go:89] found id: ""
	I0229 02:35:24.584601  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.584613  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:24.584626  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:24.584645  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:24.669019  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
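
Annotation: the repeated "connection to the server localhost:8443 was refused" blocks above mean the kube-apiserver for this cluster (process 370051, Kubernetes v1.16.0) never came up, which matches every crictl query returning zero containers. A minimal manual check along the same lines, assuming SSH access to the node (the first two commands mirror the ones the log already runs; the curl probe is a standard healthz check, not something minikube issues here):

    # Is an apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Any apiserver container, running or exited?
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Does the API answer on the port kubectl is trying?
    curl -k https://localhost:8443/healthz
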
	I0229 02:35:24.669044  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:24.669061  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:24.752163  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:24.752205  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:24.811945  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:24.811985  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:24.874832  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:24.874873  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:23.928222  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:23.928275  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:23.983171  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:23.983216  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:23.999343  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:23.999382  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:24.180422  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:24.180476  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.745283  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:26.768785  369591 api_server.go:72] duration metric: took 4m17.549714658s to wait for apiserver process to appear ...
	I0229 02:35:26.768823  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:35:26.768885  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:26.768949  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:26.816275  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:26.816303  369591 cri.go:89] found id: ""
	I0229 02:35:26.816314  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:26.816379  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.820985  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:26.821062  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:26.870520  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.870545  369591 cri.go:89] found id: ""
	I0229 02:35:26.870555  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:26.870613  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.875785  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:26.875869  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:26.926844  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:26.926884  369591 cri.go:89] found id: ""
	I0229 02:35:26.926895  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:26.926963  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.933667  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:26.933747  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:26.988547  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:26.988575  369591 cri.go:89] found id: ""
	I0229 02:35:26.988584  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:26.988645  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.994520  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:26.994600  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.040568  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.040602  369591 cri.go:89] found id: ""
	I0229 02:35:27.040612  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:27.040679  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.046103  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.046161  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.094322  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.094345  369591 cri.go:89] found id: ""
	I0229 02:35:27.094357  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:27.094428  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.101702  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.101779  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.164549  369591 cri.go:89] found id: ""
	I0229 02:35:27.164584  369591 logs.go:276] 0 containers: []
	W0229 02:35:27.164596  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.164604  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:27.164674  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:27.219403  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:27.219431  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:27.219436  369591 cri.go:89] found id: ""
	I0229 02:35:27.219447  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:27.219510  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.226705  369591 ssh_runner.go:195] Run: which crictl
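
Annotation: two storage-provisioner IDs appear because the listing above uses State:all, so `crictl ps -a` also returns exited containers; the second ID is most likely the instance that ran before the restart. A sketch mirroring the log's own invocation, to separate the two:

    # all states: typically includes the pre-restart container
    sudo crictl ps -a --name=storage-provisioner
    # running only
    sudo crictl ps --name=storage-provisioner
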
	I0229 02:35:27.233551  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:27.233576  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.281111  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:27.281152  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.333686  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:27.333738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:27.948683  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.948736  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:28.018866  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:28.018917  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:28.164820  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:28.164857  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:28.222926  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:28.222963  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:28.265708  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:28.265738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:28.309311  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.309352  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:28.363295  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:28.363341  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:28.384099  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:28.384146  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:28.451988  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:28.452025  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:28.499748  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:28.499783  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:25.586753  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.589329  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
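
Annotation: the pod_ready lines from process 369508 are a poll loop; metrics-server-57f55c9bc5-6p7f7 stays Ready=False for the whole window (the same check keeps firing past 02:36 below). With kubectl pointed at the affected cluster, the usual way to see why such a pod is unready is (standard kubectl, shown only as a sketch):

    kubectl -n kube-system get pod metrics-server-57f55c9bc5-6p7f7
    # the Events section shows image-pull or readiness-probe failures
    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-6p7f7
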
	I0229 02:35:27.392846  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:27.419255  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:27.419339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:27.465294  370051 cri.go:89] found id: ""
	I0229 02:35:27.465325  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.465337  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:27.465345  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:27.465417  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:27.533393  370051 cri.go:89] found id: ""
	I0229 02:35:27.533424  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.533433  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:27.533441  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:27.533510  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:27.587195  370051 cri.go:89] found id: ""
	I0229 02:35:27.587221  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.587232  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:27.587240  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:27.587313  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:27.638597  370051 cri.go:89] found id: ""
	I0229 02:35:27.638624  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.638632  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:27.638639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:27.638709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.687695  370051 cri.go:89] found id: ""
	I0229 02:35:27.687730  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.687742  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:27.687750  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.687825  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.732275  370051 cri.go:89] found id: ""
	I0229 02:35:27.732309  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.732320  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:27.732327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.732389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.783069  370051 cri.go:89] found id: ""
	I0229 02:35:27.783109  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.783122  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.783133  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:27.783224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:27.832385  370051 cri.go:89] found id: ""
	I0229 02:35:27.832416  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.832429  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:27.832443  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.832460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:27.902610  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:27.902658  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:27.919900  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:27.919947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:28.003313  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:28.003337  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:28.003356  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:28.100814  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.100853  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:30.654289  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:30.683056  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:30.683141  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:30.734678  370051 cri.go:89] found id: ""
	I0229 02:35:30.734704  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.734712  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:30.734719  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:30.734771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:30.780792  370051 cri.go:89] found id: ""
	I0229 02:35:30.780821  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.780830  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:30.780837  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:30.780904  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:30.827244  370051 cri.go:89] found id: ""
	I0229 02:35:30.827269  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.827278  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:30.827285  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:30.827336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:30.871305  370051 cri.go:89] found id: ""
	I0229 02:35:30.871333  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.871342  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:30.871348  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:30.871423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:30.910095  370051 cri.go:89] found id: ""
	I0229 02:35:30.910121  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.910130  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:30.910136  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:30.910188  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:30.955234  370051 cri.go:89] found id: ""
	I0229 02:35:30.955261  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.955271  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:30.955278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:30.955345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:30.996555  370051 cri.go:89] found id: ""
	I0229 02:35:30.996589  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.996602  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:30.996611  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:30.996687  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:31.036424  370051 cri.go:89] found id: ""
	I0229 02:35:31.036454  370051 logs.go:276] 0 containers: []
	W0229 02:35:31.036464  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:31.036474  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.036488  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.107928  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.107987  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.125268  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.125303  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.053142  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:35:31.060477  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:35:31.062106  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:35:31.062143  369591 api_server.go:131] duration metric: took 4.2933111s to wait for apiserver health ...
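
Annotation: here the poll succeeds; process 369591's apiserver at 192.168.72.114:8443 answers /healthz with 200, so the wait loop moves on to the kube-system pod checks. The equivalent by hand (endpoint taken from the log line above; -k skips certificate verification for a quick probe):

    curl -k https://192.168.72.114:8443/healthz
    # expected output: ok
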
	I0229 02:35:31.062154  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:35:31.062189  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:31.062278  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:31.119877  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:31.119905  369591 cri.go:89] found id: ""
	I0229 02:35:31.119915  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:31.119981  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.125569  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:31.125648  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:31.193662  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.193693  369591 cri.go:89] found id: ""
	I0229 02:35:31.193704  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:31.193762  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.199267  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:31.199365  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:31.251832  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.251862  369591 cri.go:89] found id: ""
	I0229 02:35:31.251873  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:31.251935  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.258374  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:31.258477  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:31.309718  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.309745  369591 cri.go:89] found id: ""
	I0229 02:35:31.309753  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:31.309804  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.314949  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:31.315025  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:31.367936  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:31.367960  369591 cri.go:89] found id: ""
	I0229 02:35:31.367970  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:31.368038  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.373072  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:31.373137  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:31.420362  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:31.420390  369591 cri.go:89] found id: ""
	I0229 02:35:31.420402  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:31.420470  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.427151  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:31.427221  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:31.482289  369591 cri.go:89] found id: ""
	I0229 02:35:31.482321  369591 logs.go:276] 0 containers: []
	W0229 02:35:31.482333  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:31.482342  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:31.482405  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:31.526713  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.526738  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.526744  369591 cri.go:89] found id: ""
	I0229 02:35:31.526755  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:31.526807  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.531874  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.536727  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.536758  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.555901  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.555943  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.689587  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:31.689629  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.737625  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:31.737669  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.781015  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:31.781050  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.824727  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:31.824757  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.866867  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.866897  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.920324  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:31.920375  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.962783  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:31.962815  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:32.003525  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:32.003557  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:32.061377  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:32.061417  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:32.454041  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:32.454097  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:32.498969  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:32.499006  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:30.086688  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:32.087795  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:34.585435  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:35.060469  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:35:35.060503  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.060509  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.060516  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.060521  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.060525  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.060530  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.060538  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.060543  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.060553  369591 system_pods.go:74] duration metric: took 3.99838967s to wait for pod list to return data ...
	I0229 02:35:35.060563  369591 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:35:35.063638  369591 default_sa.go:45] found service account: "default"
	I0229 02:35:35.063665  369591 default_sa.go:55] duration metric: took 3.094531ms for default service account to be created ...
	I0229 02:35:35.063676  369591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:35:35.071344  369591 system_pods.go:86] 8 kube-system pods found
	I0229 02:35:35.071366  369591 system_pods.go:89] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.071371  369591 system_pods.go:89] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.071375  369591 system_pods.go:89] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.071380  369591 system_pods.go:89] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.071385  369591 system_pods.go:89] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.071389  369591 system_pods.go:89] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.071397  369591 system_pods.go:89] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.071408  369591 system_pods.go:89] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.071420  369591 system_pods.go:126] duration metric: took 7.737446ms to wait for k8s-apps to be running ...
	I0229 02:35:35.071433  369591 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:35:35.071482  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:35.091472  369591 system_svc.go:56] duration metric: took 20.031453ms WaitForService to wait for kubelet.
	I0229 02:35:35.091504  369591 kubeadm.go:581] duration metric: took 4m25.872454283s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:35:35.091523  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:35:35.095487  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:35:35.095509  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:35:35.095546  369591 node_conditions.go:105] duration metric: took 4.018229ms to run NodePressure ...
	I0229 02:35:35.095567  369591 start.go:228] waiting for startup goroutines ...
	I0229 02:35:35.095580  369591 start.go:233] waiting for cluster config update ...
	I0229 02:35:35.095594  369591 start.go:242] writing updated cluster config ...
	I0229 02:35:35.095888  369591 ssh_runner.go:195] Run: rm -f paused
	I0229 02:35:35.154197  369591 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 02:35:35.156089  369591 out.go:177] * Done! kubectl is now configured to use "no-preload-247751" cluster and "default" namespace by default
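
Annotation: with the no-preload-247751 start complete, kubectl is pointed at the new cluster. A couple of sanity checks one could run at this point (standard kubectl; the context name is assumed to match the profile name, as the "Done!" line indicates):

    kubectl --context no-preload-247751 get nodes
    kubectl --context no-preload-247751 get pods -A
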
	W0229 02:35:31.217691  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:31.217717  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:31.217740  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:31.313847  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:31.313883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:33.861648  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:33.876887  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:33.876954  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:33.921545  370051 cri.go:89] found id: ""
	I0229 02:35:33.921577  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.921588  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:33.921597  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:33.921658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:33.972558  370051 cri.go:89] found id: ""
	I0229 02:35:33.972584  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.972592  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:33.972599  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:33.972662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:34.020821  370051 cri.go:89] found id: ""
	I0229 02:35:34.020852  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.020862  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:34.020873  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:34.020937  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:34.064076  370051 cri.go:89] found id: ""
	I0229 02:35:34.064110  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.064121  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:34.064129  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:34.064191  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:34.108523  370051 cri.go:89] found id: ""
	I0229 02:35:34.108557  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.108568  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:34.108576  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:34.108639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:34.149444  370051 cri.go:89] found id: ""
	I0229 02:35:34.149468  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.149478  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:34.149487  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:34.149562  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:34.193780  370051 cri.go:89] found id: ""
	I0229 02:35:34.193805  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.193814  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:34.193820  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:34.193913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:34.237088  370051 cri.go:89] found id: ""
	I0229 02:35:34.237118  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.237127  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:34.237137  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:34.237151  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:34.281055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:34.281091  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:34.333886  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:34.333925  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:34.353163  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:34.353204  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:34.465925  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:34.465951  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:34.465969  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:36.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:39.086456  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:37.049957  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:37.064297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:37.064384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:37.105669  370051 cri.go:89] found id: ""
	I0229 02:35:37.105703  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.105711  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:37.105720  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:37.105790  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:37.143753  370051 cri.go:89] found id: ""
	I0229 02:35:37.143788  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.143799  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:37.143808  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:37.143880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:37.180126  370051 cri.go:89] found id: ""
	I0229 02:35:37.180157  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.180166  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:37.180173  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:37.180227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:37.221135  370051 cri.go:89] found id: ""
	I0229 02:35:37.221173  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.221185  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:37.221193  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:37.221261  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:37.258888  370051 cri.go:89] found id: ""
	I0229 02:35:37.258920  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.258932  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:37.258940  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:37.259005  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:37.300970  370051 cri.go:89] found id: ""
	I0229 02:35:37.300998  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.301010  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:37.301018  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:37.301105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:37.349797  370051 cri.go:89] found id: ""
	I0229 02:35:37.349829  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.349841  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:37.349850  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:37.349916  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:37.408726  370051 cri.go:89] found id: ""
	I0229 02:35:37.408762  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.408773  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:37.408787  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:37.408805  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:37.462030  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:37.462064  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:37.477836  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:37.477868  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:37.553886  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:37.553924  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:37.553941  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:37.644637  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:37.644683  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:40.197937  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:40.212830  370051 kubeadm.go:640] restartCluster took 4m14.648338345s
	W0229 02:35:40.212984  370051 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 02:35:40.213021  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:35:40.673169  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:40.690108  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:40.702424  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:40.713782  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
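
Annotation: this is minikube's restart fallback in full. After ~4m14s with no apiserver process, restartCluster gives up, runs `kubeadm reset`, checks for stale kubeconfigs under /etc/kubernetes (none exist, so stale-config cleanup is skipped), and falls through to a fresh `kubeadm init` immediately below. Condensed, the logged sequence is roughly (a sketch of the commands as logged, not minikube source; the full --ignore-preflight-errors list is in the init line that follows):

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # exit status 2 here means nothing stale to clean up
    sudo ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=...
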
	I0229 02:35:40.713832  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:35:40.775345  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:35:40.775527  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:35:40.929045  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:35:40.929185  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:35:40.929310  370051 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:35:41.154311  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:35:41.154449  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:35:41.162905  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:35:41.317651  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:35:41.319260  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:35:41.319358  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:35:41.319458  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:35:41.319564  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:35:41.319675  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:35:41.319772  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:35:41.319857  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:35:41.319963  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:35:41.320066  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:35:41.320166  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:35:41.320289  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:35:41.320357  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:35:41.320439  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:35:41.457291  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:35:41.599703  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:35:41.766344  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:35:41.939397  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:35:41.940740  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:35:41.090698  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:43.585822  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:41.942544  370051 out.go:204]   - Booting up control plane ...
	I0229 02:35:41.942656  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:35:41.946949  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:35:41.949540  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:35:41.950426  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:35:41.953310  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:35:45.586855  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:48.085961  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:50.585602  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:52.587992  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:55.085046  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.086710  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:59.590441  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.264698  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.262039409s)
	I0229 02:35:57.264826  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:57.285615  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:57.297607  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:57.309412  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:35:57.309471  369869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:35:57.540175  369869 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
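
Annotation: this preflight message is only a warning; the kubelet unit is running (minikube starts it directly) but is not enabled for boot. The remedy is the one the message itself suggests, plus a verification step (sketch):

    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet.service   # should now print "enabled"
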
	I0229 02:36:02.086317  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:04.587625  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.714158  369869 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:36:06.714249  369869 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:36:06.714325  369869 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:36:06.714490  369869 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:36:06.714633  369869 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:36:06.714742  369869 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:36:06.716059  369869 out.go:204]   - Generating certificates and keys ...
	I0229 02:36:06.716160  369869 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:36:06.716250  369869 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:36:06.716357  369869 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:36:06.716434  369869 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:36:06.716508  369869 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:36:06.716572  369869 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:36:06.716649  369869 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:36:06.716722  369869 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:36:06.716824  369869 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:36:06.716952  369869 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:36:06.717008  369869 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:36:06.717080  369869 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:36:06.717147  369869 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:36:06.717221  369869 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:36:06.717298  369869 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:36:06.717367  369869 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:36:06.717474  369869 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:36:06.717559  369869 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:36:06.718770  369869 out.go:204]   - Booting up control plane ...
	I0229 02:36:06.718866  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:36:06.718983  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:36:06.719074  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:36:06.719230  369869 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:36:06.719364  369869 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:36:06.719431  369869 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:36:06.719628  369869 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:36:06.719749  369869 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503520 seconds
	I0229 02:36:06.719906  369869 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:36:06.720060  369869 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:36:06.720126  369869 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:36:06.720344  369869 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-071485 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:36:06.720433  369869 kubeadm.go:322] [bootstrap-token] Using token: oueq3v.8ghuyl6sece1tffl
	I0229 02:36:06.721973  369869 out.go:204]   - Configuring RBAC rules ...
	I0229 02:36:06.722107  369869 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:36:06.722252  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:36:06.722444  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:36:06.722643  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0229 02:36:06.722793  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:36:06.722937  369869 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:36:06.723081  369869 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:36:06.723119  369869 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:36:06.723188  369869 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:36:06.723198  369869 kubeadm.go:322] 
	I0229 02:36:06.723285  369869 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:36:06.723310  369869 kubeadm.go:322] 
	I0229 02:36:06.723426  369869 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:36:06.723436  369869 kubeadm.go:322] 
	I0229 02:36:06.723467  369869 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:36:06.723556  369869 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:36:06.723637  369869 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:36:06.723646  369869 kubeadm.go:322] 
	I0229 02:36:06.723713  369869 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:36:06.723722  369869 kubeadm.go:322] 
	I0229 02:36:06.723799  369869 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:36:06.723809  369869 kubeadm.go:322] 
	I0229 02:36:06.723869  369869 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:36:06.723979  369869 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:36:06.724073  369869 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:36:06.724083  369869 kubeadm.go:322] 
	I0229 02:36:06.724178  369869 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:36:06.724269  369869 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:36:06.724279  369869 kubeadm.go:322] 
	I0229 02:36:06.724389  369869 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724520  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 \
	I0229 02:36:06.724552  369869 kubeadm.go:322] 	--control-plane 
	I0229 02:36:06.724560  369869 kubeadm.go:322] 
	I0229 02:36:06.724665  369869 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:36:06.724675  369869 kubeadm.go:322] 
	I0229 02:36:06.724767  369869 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724923  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
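If the --discovery-token-ca-cert-hash printed above is ever lost, it can be recomputed from the cluster CA using the standard kubeadm recipe (the cert path here assumes minikube's /var/lib/minikube/certs layout seen earlier in this log; stock kubeadm uses /etc/kubernetes/pki/ca.crt):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'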
	I0229 02:36:06.724941  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:36:06.724952  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:36:06.726566  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:36:07.088398  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:09.587442  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.727880  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:36:06.786343  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
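The 457-byte file scp'd above is the bridge CNI config minikube recommends for the kvm2 driver with crio; a representative conflist sketch (field values here are illustrative, the real file may differ):

	# illustrative bridge conflist -- minikube writes the actual /etc/cni/net.d/1-k8s.conflist
	sudo tee /etc/cni/net.d/1-k8s.conflist.sketch <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF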
	I0229 02:36:06.842349  369869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:36:06.842420  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=default-k8s-diff-port-071485 minikube.k8s.io/updated_at=2024_02_29T02_36_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:06.842428  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.196763  369869 ops.go:34] apiserver oom_adj: -16
	I0229 02:36:07.196958  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.696991  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.197336  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.697155  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.197955  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.697107  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:10.197816  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.085528  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:14.085852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:10.697486  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.197744  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.697179  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.197614  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.697015  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.197983  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.697315  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.196982  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.698012  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.197896  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.697895  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.197062  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.697819  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.197222  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.697031  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.197683  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.697094  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.870924  369869 kubeadm.go:1088] duration metric: took 12.028572011s to wait for elevateKubeSystemPrivileges.
	I0229 02:36:18.870961  369869 kubeadm.go:406] StartCluster complete in 5m19.353203226s
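The burst of identical `kubectl get sa default` runs between 02:36:07 and 02:36:18 is minikube polling until the default service account exists, i.e. until kube-system privileges are elevated; as a shell loop it is roughly:

	# hedged sketch of the retry loop behind elevateKubeSystemPrivileges
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the log shows roughly 500ms between attempts
	done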
	I0229 02:36:18.870986  369869 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.871077  369869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:36:18.873654  369869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.873954  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:36:18.874041  369869 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:36:18.874118  369869 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874130  369869 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874142  369869 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874149  369869 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:36:18.874152  369869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071485"
	I0229 02:36:18.874201  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874256  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:36:18.874341  369869 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874359  369869 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874367  369869 addons.go:243] addon metrics-server should already be in state true
	I0229 02:36:18.874422  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874637  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874691  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874811  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874846  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.892207  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0229 02:36:18.892260  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0229 02:36:18.892967  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.892986  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.893508  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893528  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893680  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893700  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893936  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894102  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894143  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0229 02:36:18.894331  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.894582  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.894594  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.894613  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.895109  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.895143  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.895508  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.896106  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.896142  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.898127  369869 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.898143  369869 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:36:18.898168  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.898482  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.898516  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.917303  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0229 02:36:18.917472  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I0229 02:36:18.917747  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.917894  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.918493  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918510  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.918654  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918665  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.919012  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919077  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919229  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.919754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.921030  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.922677  369869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:36:18.921622  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.923872  369869 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:18.923899  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:36:18.923919  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.925237  369869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:36:18.926153  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:36:18.924603  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0229 02:36:18.926269  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:36:18.926303  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.927739  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.928184  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.928277  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.928299  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.930032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.930057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.930386  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.930456  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.930614  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.930723  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.930914  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931014  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.931133  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.931185  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.931533  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.931553  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.931576  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931737  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.932033  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.932190  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.948311  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0229 02:36:18.949328  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.949793  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.949819  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.950313  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.950529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.952381  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.952660  369869 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:18.952673  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:36:18.952689  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.956332  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.956779  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.956808  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.957117  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.957313  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.957425  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.957485  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:19.128114  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:19.141619  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:36:19.141649  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:36:19.169945  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:19.187099  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:36:19.187124  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:36:19.211358  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
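The sed pipeline above splices a hosts block (plus a log directive) into the CoreDNS Corefile so that pods can resolve host.minikube.internal; after the replace, the affected stanza reads roughly:

	# resulting Corefile fragment (sketch; IP taken from this log)
	    log
	    errors
	    hosts {
	       192.168.61.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf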
	I0229 02:36:19.289856  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:36:19.289880  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:36:19.398720  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
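Once those four manifests apply, the metrics pipeline can be sanity-checked with standard kubectl (the APIService name is the one metrics-server conventionally registers):

	kubectl get apiservice v1beta1.metrics.k8s.io
	kubectl top nodes   # only succeeds once metrics-server is actually serving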
	I0229 02:36:19.414512  369869 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-071485" context rescaled to 1 replicas
	I0229 02:36:19.414562  369869 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:36:19.416389  369869 out.go:177] * Verifying Kubernetes components...
	I0229 02:36:15.586606  369508 pod_ready.go:81] duration metric: took 4m0.008250092s waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:36:15.586638  369508 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:36:15.586648  369508 pod_ready.go:38] duration metric: took 4m5.573018241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
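When a readiness wait times out like this, the usual next step is to inspect the pod named in the log directly:

	kubectl -n kube-system describe pod metrics-server-57f55c9bc5-6p7f7   # events explain the unready state
	kubectl -n kube-system logs deploy/metrics-server --tail=50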
	I0229 02:36:15.586669  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:36:15.586707  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:15.586771  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:15.644937  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:15.644969  369508 cri.go:89] found id: ""
	I0229 02:36:15.644980  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:15.645054  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.653058  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:15.653137  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:15.709225  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:15.709254  369508 cri.go:89] found id: ""
	I0229 02:36:15.709264  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:15.709333  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.715304  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:15.715391  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:15.769593  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:15.769627  369508 cri.go:89] found id: ""
	I0229 02:36:15.769637  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:15.769702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.775157  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:15.775230  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:15.820002  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:15.820030  369508 cri.go:89] found id: ""
	I0229 02:36:15.820040  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:15.820105  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.827058  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:15.827122  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:15.875030  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:15.875063  369508 cri.go:89] found id: ""
	I0229 02:36:15.875074  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:15.875142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.880489  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:15.880555  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:15.929452  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:15.929476  369508 cri.go:89] found id: ""
	I0229 02:36:15.929484  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:15.929545  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.934321  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:15.934396  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:15.981960  369508 cri.go:89] found id: ""
	I0229 02:36:15.981997  369508 logs.go:276] 0 containers: []
	W0229 02:36:15.982006  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:15.982014  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:15.982077  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:16.034169  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.034196  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.034201  369508 cri.go:89] found id: ""
	I0229 02:36:16.034210  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:16.034281  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.039463  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.044719  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:16.044748  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:16.111048  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:16.111084  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:16.278784  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:16.278832  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:16.333048  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:16.333085  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:16.376514  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:16.376555  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:16.420840  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:16.420944  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:16.468273  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:16.468308  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:16.526001  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:16.526043  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.569084  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:16.569120  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.609818  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:16.609847  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:16.660979  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:16.661019  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:16.677397  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:16.677432  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:16.732421  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:16.732464  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:19.417788  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:21.277741  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.107753576s)
	I0229 02:36:21.277802  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277815  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.066425449s)
	I0229 02:36:21.277873  369869 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.149690589s)
	I0229 02:36:21.277908  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277918  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278323  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278331  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278339  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278351  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278445  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278458  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278465  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278474  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278519  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278592  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278603  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280452  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.280470  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280482  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.300880  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.300907  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.301193  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.301217  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.572633  369869 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.154816183s)
	I0229 02:36:21.572676  369869 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.572635  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.173852857s)
	I0229 02:36:21.572814  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.572842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.573207  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573215  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573228  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.573236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573538  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573575  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573587  369869 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-071485"
	I0229 02:36:21.575111  369869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:36:19.738493  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:36:19.758171  369508 api_server.go:72] duration metric: took 4m17.008228834s to wait for apiserver process to appear ...
	I0229 02:36:19.758199  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:36:19.758281  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:19.758349  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:19.811042  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:19.811071  369508 cri.go:89] found id: ""
	I0229 02:36:19.811082  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:19.811145  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.817952  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:19.818034  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:19.871006  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:19.871033  369508 cri.go:89] found id: ""
	I0229 02:36:19.871043  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:19.871109  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.877440  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:19.877512  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:19.928043  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:19.928071  369508 cri.go:89] found id: ""
	I0229 02:36:19.928081  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:19.928142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.935299  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:19.935363  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:19.977360  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:19.977391  369508 cri.go:89] found id: ""
	I0229 02:36:19.977402  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:19.977482  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.982361  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:19.982442  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:20.025903  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.025931  369508 cri.go:89] found id: ""
	I0229 02:36:20.025941  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:20.026012  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.031390  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:20.031477  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:20.080768  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.080792  369508 cri.go:89] found id: ""
	I0229 02:36:20.080800  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:20.080864  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.087322  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:20.087388  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:20.139067  369508 cri.go:89] found id: ""
	I0229 02:36:20.139111  369508 logs.go:276] 0 containers: []
	W0229 02:36:20.139124  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:20.139132  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:20.139195  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:20.193052  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:20.193085  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:20.193091  369508 cri.go:89] found id: ""
	I0229 02:36:20.193101  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:20.193174  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.199740  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.205385  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:20.205414  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:20.360843  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:20.360894  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:20.411077  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:20.411113  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:20.459855  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:20.459910  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:20.517056  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:20.517101  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.568151  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:20.568185  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.637131  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:20.637165  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.144933  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:21.144980  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:21.206565  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:21.206607  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:21.257071  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:21.257118  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:21.315541  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:21.315589  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:21.358630  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:21.358665  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:21.398170  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:21.398201  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:23.914059  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:36:23.923854  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:36:23.926443  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:36:23.926466  369508 api_server.go:131] duration metric: took 4.168260413s to wait for apiserver health ...
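The same probe can be reproduced by hand against the address in the log (-k because the serving certificate is signed by the cluster CA rather than a public one):

	curl -k https://192.168.50.218:8443/healthz
	# expected response body: ok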
	I0229 02:36:23.926475  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:36:23.926506  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:23.926566  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:24.013825  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:24.013849  369508 cri.go:89] found id: ""
	I0229 02:36:24.013857  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:24.013913  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.019432  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:24.019506  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:24.078857  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.078877  369508 cri.go:89] found id: ""
	I0229 02:36:24.078885  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:24.078945  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.083761  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:24.083822  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:24.133681  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:24.133707  369508 cri.go:89] found id: ""
	I0229 02:36:24.133717  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:24.133779  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.139165  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:24.139228  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:24.185863  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.185883  369508 cri.go:89] found id: ""
	I0229 02:36:24.185892  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:24.185939  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.191094  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:24.191164  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:24.232922  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.232953  369508 cri.go:89] found id: ""
	I0229 02:36:24.232963  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:24.233031  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.238154  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:24.238252  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:24.280735  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:24.280760  369508 cri.go:89] found id: ""
	I0229 02:36:24.280769  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:24.280842  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.285497  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:24.285558  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:24.324979  369508 cri.go:89] found id: ""
	I0229 02:36:24.325007  369508 logs.go:276] 0 containers: []
	W0229 02:36:24.325016  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:24.325022  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:24.325085  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:24.370875  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:24.370908  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:24.370912  369508 cri.go:89] found id: ""
	I0229 02:36:24.370919  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:24.370973  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.378247  369508 ssh_runner.go:195] Run: which crictl
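The block above is minikube's component-discovery pass: for each control-plane component it asks the CRI runtime for matching container IDs. A sketch of the same pass, using only the crictl invocation shown in the log (the surrounding loop is for illustration):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # same flags as the Run: lines above
      echo "$name: ${ids:-<none>}"                      # kindnet prints <none>, matching "0 containers"
    done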
	I0229 02:36:24.382856  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:24.382899  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.430889  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:24.430919  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.470370  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:24.470407  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.576300  369869 addons.go:505] enable addons completed in 2.702258052s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 02:36:21.582468  369869 node_ready.go:49] node "default-k8s-diff-port-071485" has status "Ready":"True"
	I0229 02:36:21.582494  369869 node_ready.go:38] duration metric: took 9.804213ms waiting for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.582506  369869 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:36:21.608694  369869 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125662  369869 pod_ready.go:92] pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.125695  369869 pod_ready.go:81] duration metric: took 1.51697387s waiting for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125707  369869 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141831  369869 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.141855  369869 pod_ready.go:81] duration metric: took 16.140002ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141864  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154216  369869 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.154261  369869 pod_ready.go:81] duration metric: took 12.389751ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154276  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166057  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.166085  369869 pod_ready.go:81] duration metric: took 11.798242ms waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166098  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179414  369869 pod_ready.go:92] pod "kube-proxy-gr44w" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.179437  369869 pod_ready.go:81] duration metric: took 13.331411ms waiting for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179447  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576569  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.576597  369869 pod_ready.go:81] duration metric: took 397.142516ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576611  369869 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
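The pod_ready chain above walks the system-critical pods one at a time. Roughly the same check can be expressed with kubectl wait; a sketch, assuming the kubectl context carries the profile name and using two of the selectors listed in the "extra waiting" line:

    kubectl --context default-k8s-diff-port-071485 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
    kubectl --context default-k8s-diff-port-071485 -n kube-system wait pod \
      -l component=etcd --for=condition=Ready --timeout=6m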
	I0229 02:36:21.953781  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:36:21.954431  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:21.954685  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
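The kubelet-check lines are kubeadm probing the kubelet's local health port, 10248. The probe is quoted inside the message itself and can be rerun as-is on the node; connection refused means nothing is listening, i.e. the kubelet process never came up:

    curl -sSL http://localhost:10248/healthz   # the exact call kubeadm reports making
    sudo systemctl status kubelet              # confirm whether the service is running at all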
	I0229 02:36:24.880947  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:24.880985  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:24.939045  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:24.939079  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.987109  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:24.987144  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:25.049095  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:25.049131  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:25.091654  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:25.091686  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:25.153281  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:25.153326  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:25.169544  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:25.169575  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:25.294469  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:25.294504  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:25.346867  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:25.346900  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:25.388876  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:25.388921  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
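With the IDs in hand, each component's log is tailed, along with the runtime journal, the kubelet journal, and dmesg. These are the literal commands from the Run: lines above (container ID elided) and can be replayed on the node:

    sudo /usr/bin/crictl logs --tail 400 <container-id>   # per-component container logs
    sudo journalctl -u crio -n 400                        # container runtime
    sudo journalctl -u kubelet -n 400                     # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400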
	I0229 02:36:27.937848  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:36:27.937878  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.937883  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.937888  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.937891  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.937894  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.937898  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.937903  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.937908  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.937922  369508 system_pods.go:74] duration metric: took 4.011440564s to wait for pod list to return data ...
	I0229 02:36:27.937933  369508 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:36:27.940602  369508 default_sa.go:45] found service account: "default"
	I0229 02:36:27.940623  369508 default_sa.go:55] duration metric: took 2.681589ms for default service account to be created ...
	I0229 02:36:27.940632  369508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:36:27.947433  369508 system_pods.go:86] 8 kube-system pods found
	I0229 02:36:27.947455  369508 system_pods.go:89] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.947466  369508 system_pods.go:89] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.947472  369508 system_pods.go:89] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.947482  369508 system_pods.go:89] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.947491  369508 system_pods.go:89] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.947497  369508 system_pods.go:89] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.947508  369508 system_pods.go:89] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.947518  369508 system_pods.go:89] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.947531  369508 system_pods.go:126] duration metric: took 6.892538ms to wait for k8s-apps to be running ...
	I0229 02:36:27.947539  369508 system_svc.go:44] waiting for kubelet service to be running ...
	I0229 02:36:27.947591  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:27.965730  369508 system_svc.go:56] duration metric: took 18.181663ms WaitForService to wait for kubelet.
	I0229 02:36:27.965756  369508 kubeadm.go:581] duration metric: took 4m25.215820473s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:36:27.965780  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:36:27.970094  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:36:27.970123  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:36:27.970138  369508 node_conditions.go:105] duration metric: took 4.347423ms to run NodePressure ...
	I0229 02:36:27.970152  369508 start.go:228] waiting for startup goroutines ...
	I0229 02:36:27.970162  369508 start.go:233] waiting for cluster config update ...
	I0229 02:36:27.970175  369508 start.go:242] writing updated cluster config ...
	I0229 02:36:27.970529  369508 ssh_runner.go:195] Run: rm -f paused
	I0229 02:36:28.020686  369508 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:36:28.022730  369508 out.go:177] * Done! kubectl is now configured to use "embed-certs-915633" cluster and "default" namespace by default
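The closing note flags a client/server minor-version skew of 1 (kubectl 1.29.2 against Kubernetes 1.28.4), which is inside kubectl's supported +/-1 minor window, so it is informational only. To see the same skew directly:

    kubectl version --output=yaml   # compare the clientVersion and serverVersion gitVersion fields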
	I0229 02:36:25.585985  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:28.085278  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:26.954801  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:26.955093  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:30.583462  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:32.584198  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:34.585129  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:37.085551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:39.584450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:36.955344  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:36.955543  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:41.585000  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:44.083919  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:46.085694  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:48.583474  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:50.584026  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:53.084622  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:55.084729  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:57.084941  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:59.586329  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:56.957911  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:56.958178  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:02.085189  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:04.085672  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:06.586906  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:09.085130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:11.583811  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:13.585179  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:16.083670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:18.084884  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:20.584395  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:22.585487  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:24.586088  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:26.586608  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:29.084644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:31.585292  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:34.083690  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:36.959509  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:37:36.959795  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:36.959812  370051 kubeadm.go:322] 
	I0229 02:37:36.959848  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:37:36.959887  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:37:36.959893  370051 kubeadm.go:322] 
	I0229 02:37:36.959937  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:37:36.959991  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:37:36.960142  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:37:36.960167  370051 kubeadm.go:322] 
	I0229 02:37:36.960282  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:37:36.960318  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:37:36.960362  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:37:36.960371  370051 kubeadm.go:322] 
	I0229 02:37:36.960482  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:37:36.960617  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0229 02:37:36.960756  370051 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0229 02:37:36.960839  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:37:36.960951  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:37:36.961015  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:37:36.961366  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:37:36.961507  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:37:36.961616  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 02:37:36.961763  370051 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 02:37:36.961835  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
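The failed init ends with kubeadm's stock troubleshooting advice, and minikube responds by resetting the node (the kubeadm reset just above) before retrying. On this CRI-O node the docker-based container-listing example translates naturally to crictl; the translation is mine, the other two commands are exactly as suggested:

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    sudo crictl ps -a | grep kube | grep -v pause   # CRI-O equivalent of the docker example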
	I0229 02:37:37.427665  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:37:37.443045  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:37:37.456937  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
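The ls probe above is minikube's stale-config check: if the four kubeconfigs under /etc/kubernetes are present, they are cleaned up before init is re-run. Here kubeadm reset already removed them, so the probe exits 2 and the cleanup is skipped. The check, replayable as-is:

    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    echo "exit: $?"   # 2 means the files are gone, so stale-config cleanup is skipped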
	I0229 02:37:37.456979  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:37:37.529093  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:37:37.529246  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:37:37.670260  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:37:37.670417  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:37:37.670548  370051 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 02:37:37.904220  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:37:37.905569  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:37:37.914919  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:37:38.070911  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:37:38.072738  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:37:38.072860  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:37:38.072951  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:37:38.073049  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:37:38.073132  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:37:38.073230  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:37:38.073299  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:37:38.073376  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:37:38.073458  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:37:38.073566  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:37:38.073680  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:37:38.073720  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:37:38.073794  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:37:38.209805  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:37:38.305550  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:37:38.464715  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:37:38.623139  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:37:38.624364  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:37:36.084556  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.086561  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.625883  370051 out.go:204]   - Booting up control plane ...
	I0229 02:37:38.626039  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:37:38.630668  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:37:38.631740  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:37:38.632687  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:37:38.636043  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
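At this point kubeadm has written the four static Pod manifests and is waiting for the kubelet to pick them up from the manifests directory. A quick way to confirm the manifests exist (the folder is named in the control-plane lines above):

    ls -l /etc/kubernetes/manifests
    # expected: etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml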
	I0229 02:37:40.583589  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:42.583968  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:44.584409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:46.586413  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:49.084223  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:51.584770  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:53.584871  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:55.585299  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:58.084753  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:00.584432  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:03.085511  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:05.585519  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:08.085774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:10.087984  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:12.584744  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:15.085757  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:17.584807  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:19.588130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:18.637746  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:38:18.638616  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:18.638883  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:22.084442  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:24.085227  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:23.639374  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:23.639613  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:26.087774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:28.584872  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:30.587375  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.085060  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:35.086106  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.640169  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:33.640468  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:37.584670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:40.085797  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:42.585365  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:44.587079  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:46.590638  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:49.086500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:51.584286  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.587405  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.640871  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:53.641147  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:56.084551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:58.085668  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:00.086247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:02.588854  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:05.085163  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:07.090885  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:09.583687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:11.585184  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:14.085800  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:16.086643  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:18.584073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:21.084992  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:23.585496  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:25.586111  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:28.086464  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:33.642813  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:39:33.643083  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:39:33.643099  370051 kubeadm.go:322] 
	I0229 02:39:33.643153  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:39:33.643206  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:39:33.643213  370051 kubeadm.go:322] 
	I0229 02:39:33.643252  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:39:33.643296  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:39:33.643443  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:39:33.643455  370051 kubeadm.go:322] 
	I0229 02:39:33.643605  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:39:33.643655  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:39:33.643700  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:39:33.643714  370051 kubeadm.go:322] 
	I0229 02:39:33.643871  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:39:33.644040  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0229 02:39:33.644193  370051 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0229 02:39:33.644272  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:39:33.644371  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:39:33.644412  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:39:33.644855  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:39:33.644972  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:39:33.645065  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:39:33.645132  370051 kubeadm.go:406] StartCluster complete in 8m8.138449101s
	I0229 02:39:33.645178  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:39:33.645255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:39:33.699121  370051 cri.go:89] found id: ""
	I0229 02:39:33.699154  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.699166  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:39:33.699174  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:39:33.699240  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:39:33.747229  370051 cri.go:89] found id: ""
	I0229 02:39:33.747260  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.747272  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:39:33.747279  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:39:33.747349  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:39:33.789303  370051 cri.go:89] found id: ""
	I0229 02:39:33.789334  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.789343  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:39:33.789350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:39:33.789413  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:39:33.832769  370051 cri.go:89] found id: ""
	I0229 02:39:33.832801  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.832814  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:39:33.832824  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:39:33.832891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:39:33.881508  370051 cri.go:89] found id: ""
	I0229 02:39:33.881543  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.881554  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:39:33.881571  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:39:33.881635  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:39:33.941691  370051 cri.go:89] found id: ""
	I0229 02:39:33.941728  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.941740  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:39:33.941749  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:39:33.941822  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:39:33.990639  370051 cri.go:89] found id: ""
	I0229 02:39:33.990681  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.990704  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:39:33.990713  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:39:33.990774  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:39:34.038426  370051 cri.go:89] found id: ""
	I0229 02:39:34.038460  370051 logs.go:276] 0 containers: []
	W0229 02:39:34.038470  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
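The post-mortem sweep above queries every component name and finds zero containers, dashboard included: the kubelet never created any control-plane pods, which matches the healthz refusals earlier. The whole sweep collapses to a single check on the node:

    sudo crictl ps -a   # an empty table here is the same result as the "0 containers" lines above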
	I0229 02:39:34.038480  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:39:34.038497  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:39:34.054571  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:39:34.054604  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:39:34.131297  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
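describe nodes fails with connection refused on localhost:8443, consistent with the missing kube-apiserver container. A direct way to confirm nothing is serving the apiserver port:

    sudo ss -ltn | grep 8443 || echo "nothing listening on 8443"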
	I0229 02:39:34.131323  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:39:34.131337  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:39:34.232302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:39:34.232349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:39:34.283314  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:39:34.283351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:39:34.336858  370051 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:39:34.336920  370051 out.go:239] * 
	W0229 02:39:34.336985  370051 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:39:34.337006  370051 out.go:239] * 
	W0229 02:39:34.337787  370051 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:39:34.340744  370051 out.go:177] 
	W0229 02:39:34.342096  370051 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:39:34.342137  370051 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:39:34.342160  370051 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:39:34.343540  370051 out.go:177] 
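For triage, the remediation that minikube prints above can be run by hand; a minimal sketch, using only commands quoted in the output itself, run from a shell on the node (e.g. via minikube ssh). The profile name is not visible at this point in the log, so PROFILE below is a placeholder:

	# Probe the kubelet health endpoint that kubeadm polls during wait-control-plane
	curl -sSL http://localhost:10248/healthz
	# Inspect kubelet state and recent logs, as suggested in the output above
	systemctl status kubelet
	journalctl -xeu kubelet
	# Retry the start with the cgroup-driver override minikube suggests above
	minikube start -p PROFILE --extra-config=kubelet.cgroup-driver=systemd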
	I0229 02:39:30.584963  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:32.585599  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:34.588073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:37.085513  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:39.584721  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:41.585072  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:44.086996  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:46.587437  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:49.083819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:51.084472  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:53.085522  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:55.585518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:58.084454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:00.085075  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:02.588500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:05.083707  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:07.084423  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:09.584552  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:11.590611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:14.084618  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:16.597479  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:19.086312  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:21.586450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:23.583798  369869 pod_ready.go:81] duration metric: took 4m0.007166298s waiting for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
	E0229 02:40:23.583824  369869 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:40:23.583834  369869 pod_ready.go:38] duration metric: took 4m2.001316522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
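The 4m0s wait above gives up with metrics-server-57f55c9bc5-fpwzl still not Ready. A quick way to see why, sketched with the pod name taken from the log (an appropriate kubectl context for this cluster is assumed):

	# Show container statuses, conditions, and scheduling events for the stuck pod
	kubectl -n kube-system describe pod metrics-server-57f55c9bc5-fpwzl
	# Recent namespace events, newest last
	kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20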
	I0229 02:40:23.583860  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:40:23.583899  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:23.584002  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:23.655958  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:23.655987  369869 cri.go:89] found id: ""
	I0229 02:40:23.655997  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:23.656057  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.661125  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:23.661199  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:23.712373  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:23.712400  369869 cri.go:89] found id: ""
	I0229 02:40:23.712410  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:23.712508  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.718149  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:23.718209  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:23.775835  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:23.775858  369869 cri.go:89] found id: ""
	I0229 02:40:23.775867  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:23.775923  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.780698  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:23.780792  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:23.825914  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:23.825939  369869 cri.go:89] found id: ""
	I0229 02:40:23.825949  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:23.826017  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.830870  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:23.830932  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:23.868737  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:23.868767  369869 cri.go:89] found id: ""
	I0229 02:40:23.868777  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:23.868841  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.873522  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:23.873598  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:23.918640  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:23.918663  369869 cri.go:89] found id: ""
	I0229 02:40:23.918671  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:23.918725  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.923456  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:23.923517  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:23.963045  369869 cri.go:89] found id: ""
	I0229 02:40:23.963071  369869 logs.go:276] 0 containers: []
	W0229 02:40:23.963080  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:23.963085  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:23.963136  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:24.006380  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:24.006402  369869 cri.go:89] found id: ""
	I0229 02:40:24.006409  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:24.006459  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:24.012228  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:24.012269  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:24.095110  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:24.095354  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:24.117199  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:24.117229  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:24.181064  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:24.181126  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:24.239267  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:24.239305  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:24.283248  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:24.283281  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:24.746786  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:24.746831  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:24.764451  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:24.764487  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:24.917582  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:24.917625  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:24.980095  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:24.980142  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:25.028219  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:25.028253  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:25.083840  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:25.083874  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:25.131148  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:25.131179  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:25.179314  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:25.179340  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:25.179415  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:25.179432  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:25.179455  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:25.179471  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:25.179479  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
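The kubelet problem flagged above is a node-authorizer denial: the user system:node:default-k8s-diff-port-071485 was refused a list of the coredns ConfigMap because the API server found no relationship between the node and that object. Two read-only checks, sketched with the object names taken from the log:

	# The object the kubelet was denied access to
	kubectl -n kube-system get configmap coredns
	# Confirm the node object is registered under the expected name
	kubectl get node default-k8s-diff-port-071485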
	I0229 02:40:35.181209  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:40:35.199982  369869 api_server.go:72] duration metric: took 4m15.785374734s to wait for apiserver process to appear ...
	I0229 02:40:35.200012  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:40:35.200052  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:35.200109  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:35.241760  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:35.241786  369869 cri.go:89] found id: ""
	I0229 02:40:35.241795  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:35.241846  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.247188  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:35.247294  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:35.293992  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:35.294022  369869 cri.go:89] found id: ""
	I0229 02:40:35.294033  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:35.294098  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.298900  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:35.298971  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:35.340809  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:35.340835  369869 cri.go:89] found id: ""
	I0229 02:40:35.340843  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:35.340903  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.345913  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:35.345988  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:35.392027  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:35.392061  369869 cri.go:89] found id: ""
	I0229 02:40:35.392072  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:35.392140  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.397043  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:35.397120  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:35.452900  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:35.452931  369869 cri.go:89] found id: ""
	I0229 02:40:35.452942  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:35.453014  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.459221  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:35.459303  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:35.503530  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:35.503555  369869 cri.go:89] found id: ""
	I0229 02:40:35.503563  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:35.503615  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.509021  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:35.509083  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:35.553777  369869 cri.go:89] found id: ""
	I0229 02:40:35.553803  369869 logs.go:276] 0 containers: []
	W0229 02:40:35.553812  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:35.553818  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:35.553868  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:35.605234  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:35.605259  369869 cri.go:89] found id: ""
	I0229 02:40:35.605267  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:35.605333  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.610433  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:35.610465  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:36.030757  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:36.030807  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:36.047193  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:36.047224  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:36.105936  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:36.105983  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:36.169028  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:36.169080  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:36.241640  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:36.241678  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:36.284787  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:36.284822  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:36.333264  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:36.333293  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:36.385436  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:36.385468  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:36.463289  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:36.463491  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:36.485748  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:36.485782  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:36.604181  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:36.604218  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:36.659210  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:36.659247  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:36.704612  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:36.704640  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:36.704695  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:36.704706  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:36.704712  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:36.704719  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:36.704726  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:46.705868  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:40:46.711301  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:40:46.713000  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:40:46.713025  369869 api_server.go:131] duration metric: took 11.513005073s to wait for apiserver health ...
	I0229 02:40:46.713034  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:40:46.713061  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:46.713121  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:46.759486  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:46.759505  369869 cri.go:89] found id: ""
	I0229 02:40:46.759517  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:46.759581  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.764215  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:46.764299  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:46.805016  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:46.805042  369869 cri.go:89] found id: ""
	I0229 02:40:46.805049  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:46.805113  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.810213  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:46.810284  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:46.862825  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:46.862855  369869 cri.go:89] found id: ""
	I0229 02:40:46.862867  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:46.862923  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.867531  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:46.867588  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:46.914211  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:46.914247  369869 cri.go:89] found id: ""
	I0229 02:40:46.914258  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:46.914327  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.918857  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:46.918921  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:46.959981  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:46.960016  369869 cri.go:89] found id: ""
	I0229 02:40:46.960027  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:46.960095  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.964789  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:46.964846  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:47.009289  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:47.009313  369869 cri.go:89] found id: ""
	I0229 02:40:47.009322  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:47.009390  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:47.015339  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:47.015413  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:47.059195  369869 cri.go:89] found id: ""
	I0229 02:40:47.059227  369869 logs.go:276] 0 containers: []
	W0229 02:40:47.059239  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:47.059254  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:47.059306  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:47.103293  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:47.103323  369869 cri.go:89] found id: ""
	I0229 02:40:47.103334  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:47.103401  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:47.108048  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:47.108076  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:47.157407  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:47.157441  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:47.591202  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:47.591261  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:47.644877  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:47.644910  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:47.784217  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:47.784249  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:47.839113  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:47.839144  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:47.885581  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:47.885616  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:47.930971  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:47.931009  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:47.986352  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:47.986437  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:48.067103  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:48.067316  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:48.088373  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:48.088407  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:48.105750  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:48.105781  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:48.154640  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:48.154677  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:48.196009  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:48.196042  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:48.196112  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:48.196128  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:48.196137  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:48.196146  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:48.196155  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:58.203822  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:40:58.203853  369869 system_pods.go:61] "coredns-5dd5756b68-xj4sh" [e2741c05-81b2-4de6-8329-f88912d48160] Running
	I0229 02:40:58.203859  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [88b0e865-c53e-4829-a56a-2a3b6e405df4] Running
	I0229 02:40:58.203866  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [445fa1c9-589b-437d-92ca-0d15ee8228ae] Running
	I0229 02:40:58.203872  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [e3f60cdb-6214-4987-b692-a4921ece3895] Running
	I0229 02:40:58.203877  369869 system_pods.go:61] "kube-proxy-gr44w" [a74b553f-683a-4e1b-ac48-b4553d00b306] Running
	I0229 02:40:58.203881  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [4c1afe85-10be-45e5-8b99-6bd3cf12a828] Running
	I0229 02:40:58.203888  369869 system_pods.go:61] "metrics-server-57f55c9bc5-fpwzl" [5215d27e-4bf2-4331-89f2-24096dc96b90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:40:58.203893  369869 system_pods.go:61] "storage-provisioner" [d7b70f8e-1689-4526-a39f-eb8005cbecd2] Running
	I0229 02:40:58.203902  369869 system_pods.go:74] duration metric: took 11.49086169s to wait for pod list to return data ...
	I0229 02:40:58.203913  369869 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:40:58.207120  369869 default_sa.go:45] found service account: "default"
	I0229 02:40:58.207145  369869 default_sa.go:55] duration metric: took 3.22533ms for default service account to be created ...
	I0229 02:40:58.207154  369869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:40:58.213026  369869 system_pods.go:86] 8 kube-system pods found
	I0229 02:40:58.213056  369869 system_pods.go:89] "coredns-5dd5756b68-xj4sh" [e2741c05-81b2-4de6-8329-f88912d48160] Running
	I0229 02:40:58.213065  369869 system_pods.go:89] "etcd-default-k8s-diff-port-071485" [88b0e865-c53e-4829-a56a-2a3b6e405df4] Running
	I0229 02:40:58.213073  369869 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071485" [445fa1c9-589b-437d-92ca-0d15ee8228ae] Running
	I0229 02:40:58.213081  369869 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071485" [e3f60cdb-6214-4987-b692-a4921ece3895] Running
	I0229 02:40:58.213088  369869 system_pods.go:89] "kube-proxy-gr44w" [a74b553f-683a-4e1b-ac48-b4553d00b306] Running
	I0229 02:40:58.213094  369869 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071485" [4c1afe85-10be-45e5-8b99-6bd3cf12a828] Running
	I0229 02:40:58.213107  369869 system_pods.go:89] "metrics-server-57f55c9bc5-fpwzl" [5215d27e-4bf2-4331-89f2-24096dc96b90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:40:58.213117  369869 system_pods.go:89] "storage-provisioner" [d7b70f8e-1689-4526-a39f-eb8005cbecd2] Running
	I0229 02:40:58.213130  369869 system_pods.go:126] duration metric: took 5.970128ms to wait for k8s-apps to be running ...
	I0229 02:40:58.213142  369869 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:40:58.213204  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:40:58.230150  369869 system_svc.go:56] duration metric: took 16.998299ms WaitForService to wait for kubelet.
	I0229 02:40:58.230178  369869 kubeadm.go:581] duration metric: took 4m38.815578079s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:40:58.230245  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:40:58.233660  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:40:58.233719  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:40:58.233737  369869 node_conditions.go:105] duration metric: took 3.486117ms to run NodePressure ...
	I0229 02:40:58.233756  369869 start.go:228] waiting for startup goroutines ...
	I0229 02:40:58.233766  369869 start.go:233] waiting for cluster config update ...
	I0229 02:40:58.233777  369869 start.go:242] writing updated cluster config ...
	I0229 02:40:58.234079  369869 ssh_runner.go:195] Run: rm -f paused
	I0229 02:40:58.285415  369869 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:40:58.287433  369869 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-071485" cluster and "default" namespace by default
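The start above is declared healthy once the apiserver healthz probe returns 200. The equivalent manual check, sketched with the endpoint taken from the log (-k skips TLS verification, since the host may not trust the cluster's CA):

	# Expect the literal response "ok" on a healthy apiserver
	curl -k https://192.168.61.233:8444/healthz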
	
	
	==> CRI-O <==
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.424197643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174677424172050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ec59bc3-5b66-4250-82fd-3162cbbdb8a1 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.424799310Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6f7803a-fb37-497a-a0cf-3a685942245f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.424884009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6f7803a-fb37-497a-a0cf-3a685942245f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.425278999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173898403089298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99cc0a7d8158b241a6363055d477e4538b67123266993ebc4dc7e6a9ab810e19,PodSandboxId:5dcd7325799d3ac205f9c49b90f57add472abce7a9a6ec7400e34d91b3c653e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173878169797352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22d0d5e3-3658-4122-adf1-8faffa8de817,},Annotations:map[string]string{io.kubernetes.container.hash: abba158a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab,PodSandboxId:9a3a5a98dfc7d11e1d336ca7115304ec25c8a76c37f7598640afbcfc07f9c1af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709173875201146415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2z5w8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39b5eb65-690b-488b-9bec-7cfabcc27829,},Annotations:map[string]string{io.kubernetes.container.hash: ab6762a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e,PodSandboxId:65bbb4d7efe5afd8099ff4ef00114c6ef456d6b4d5363221972138b81cfc0bc3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709173867581561724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7849f368-0bca-4c2b-ae
72-cbacef9bbb72,},Annotations:map[string]string{io.kubernetes.container.hash: b3086f3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173867580132672,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7
d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772,PodSandboxId:484ad89cf88e2554b842362cc426f924ff51d30e7f14bff7a23c0a1fd37b4661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173862820309191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0fd2b2d3a34444351a58f9cc442592,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0,PodSandboxId:9d7b51641b06fd63936e813ccc91714206dffe2ae20ba89cee69829718659b22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173862822165374,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bddedf1d587af5333bf6d061dbebe3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8122a282,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75,PodSandboxId:2c3c97570a989ff886d9fdd97254fdcf1e146d45c438de3302180cae14b318f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173862789331849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc56f6a18092022bffc9b777210b75f,},Annotations:map[string]string{io.kubernetes.container.hash: 3709
cc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5,PodSandboxId:efab7c788859d9985732ca2f1a43fdd5e21c04f334f127d75320136fc31028de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173862790794588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4104973fb9e5b903cb363d606f23991,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6f7803a-fb37-497a-a0cf-3a685942245f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.447268256Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=9ad3f0ec-9261-47e4-b950-c34f6e24e017 name=/runtime.v1.RuntimeService/Status
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.447361179Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=9ad3f0ec-9261-47e4-b950-c34f6e24e017 name=/runtime.v1.RuntimeService/Status
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.476837097Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ffd0564a-72a6-4ede-a004-bc528bc4af0c name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.477135479Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5dcd7325799d3ac205f9c49b90f57add472abce7a9a6ec7400e34d91b3c653e9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:22d0d5e3-3658-4122-adf1-8faffa8de817,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709173875052865190,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22d0d5e3-3658-4122-adf1-8faffa8de817,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-29T02:31:07.081040148Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a3a5a98dfc7d11e1d336ca7115304ec25c8a76c37f7598640afbcfc07f9c1af,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-2z5w8,Uid:39b5eb65-690b-488b-9bec-7cfabcc27829,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17091738749562793
15,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-2z5w8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39b5eb65-690b-488b-9bec-7cfabcc27829,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-29T02:31:07.081041436Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e28b35ab35ab6688edba176c8ec089dcd370f81136ab85e1c3ab0c66ab9ff151,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-zghwq,Uid:97018e51-c009-4e33-964b-9e9e4798a48a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709173873153830642,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-zghwq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97018e51-c009-4e33-964b-9e9e4798a48a,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-29T02:31:07.0
81044193Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:65bbb4d7efe5afd8099ff4ef00114c6ef456d6b4d5363221972138b81cfc0bc3,Metadata:&PodSandboxMetadata{Name:kube-proxy-cdc4l,Uid:7849f368-0bca-4c2b-ae72-cbacef9bbb72,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709173867400009785,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-cdc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7849f368-0bca-4c2b-ae72-cbacef9bbb72,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-29T02:31:07.081046617Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709173867398819888,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2024-02-29T02:31:07.081038637Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:efab7c788859d9985732ca2f1a43fdd5e21c04f334f127d75320136fc31028de,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-247751,Uid:d4104973fb9e5b903cb363d606f23991,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709173862583176545,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4104973fb9e5b903cb363d606f23991,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d4104973fb9e5b903cb363d606f23991,kubernetes.io/config.seen: 2024-02-29T02:31:02.077351816Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9d7b51641b06fd63936e813ccc91714206dffe2ae20ba89cee69829718659b22,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-247751,Uid:6bddedf1d587af5333bf6d061db
ebe3a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709173862573251590,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bddedf1d587af5333bf6d061dbebe3a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.114:2379,kubernetes.io/config.hash: 6bddedf1d587af5333bf6d061dbebe3a,kubernetes.io/config.seen: 2024-02-29T02:31:02.077349814Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c3c97570a989ff886d9fdd97254fdcf1e146d45c438de3302180cae14b318f9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-247751,Uid:4dc56f6a18092022bffc9b777210b75f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709173862569841403,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-247751,io.
kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc56f6a18092022bffc9b777210b75f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.114:8443,kubernetes.io/config.hash: 4dc56f6a18092022bffc9b777210b75f,kubernetes.io/config.seen: 2024-02-29T02:31:02.077350907Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:484ad89cf88e2554b842362cc426f924ff51d30e7f14bff7a23c0a1fd37b4661,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-247751,Uid:8a0fd2b2d3a34444351a58f9cc442592,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1709173862561822701,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0fd2b2d3a34444351a58f9cc442592,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8a0fd2b2d3a34444351a58f9cc442592,ku
bernetes.io/config.seen: 2024-02-29T02:31:02.077346031Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ffd0564a-72a6-4ede-a004-bc528bc4af0c name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.478553018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7040f7f3-62b6-47f7-9fdd-f34062276d98 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.478644960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7040f7f3-62b6-47f7-9fdd-f34062276d98 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.478857590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173898403089298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99cc0a7d8158b241a6363055d477e4538b67123266993ebc4dc7e6a9ab810e19,PodSandboxId:5dcd7325799d3ac205f9c49b90f57add472abce7a9a6ec7400e34d91b3c653e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173878169797352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22d0d5e3-3658-4122-adf1-8faffa8de817,},Annotations:map[string]string{io.kubernetes.container.hash: abba158a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab,PodSandboxId:9a3a5a98dfc7d11e1d336ca7115304ec25c8a76c37f7598640afbcfc07f9c1af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709173875201146415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2z5w8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39b5eb65-690b-488b-9bec-7cfabcc27829,},Annotations:map[string]string{io.kubernetes.container.hash: ab6762a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e,PodSandboxId:65bbb4d7efe5afd8099ff4ef00114c6ef456d6b4d5363221972138b81cfc0bc3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709173867581561724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7849f368-0bca-4c2b-ae
72-cbacef9bbb72,},Annotations:map[string]string{io.kubernetes.container.hash: b3086f3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173867580132672,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7
d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772,PodSandboxId:484ad89cf88e2554b842362cc426f924ff51d30e7f14bff7a23c0a1fd37b4661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173862820309191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0fd2b2d3a34444351a58f9cc442592,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0,PodSandboxId:9d7b51641b06fd63936e813ccc91714206dffe2ae20ba89cee69829718659b22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173862822165374,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bddedf1d587af5333bf6d061dbebe3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8122a282,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75,PodSandboxId:2c3c97570a989ff886d9fdd97254fdcf1e146d45c438de3302180cae14b318f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173862789331849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc56f6a18092022bffc9b777210b75f,},Annotations:map[string]string{io.kubernetes.container.hash: 3709
cc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5,PodSandboxId:efab7c788859d9985732ca2f1a43fdd5e21c04f334f127d75320136fc31028de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173862790794588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4104973fb9e5b903cb363d606f23991,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7040f7f3-62b6-47f7-9fdd-f34062276d98 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.488208893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3b33237-1675-4956-a0d3-faed9bc4cee2 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.488277992Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3b33237-1675-4956-a0d3-faed9bc4cee2 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.490124486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c9108e2-3678-48ee-afad-a50216b71584 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.490456234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174677490433318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c9108e2-3678-48ee-afad-a50216b71584 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.491320827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2083c72b-94b3-4a62-9a33-dfe5007020b5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.491376299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2083c72b-94b3-4a62-9a33-dfe5007020b5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.491733928Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173898403089298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99cc0a7d8158b241a6363055d477e4538b67123266993ebc4dc7e6a9ab810e19,PodSandboxId:5dcd7325799d3ac205f9c49b90f57add472abce7a9a6ec7400e34d91b3c653e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173878169797352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22d0d5e3-3658-4122-adf1-8faffa8de817,},Annotations:map[string]string{io.kubernetes.container.hash: abba158a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab,PodSandboxId:9a3a5a98dfc7d11e1d336ca7115304ec25c8a76c37f7598640afbcfc07f9c1af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709173875201146415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2z5w8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39b5eb65-690b-488b-9bec-7cfabcc27829,},Annotations:map[string]string{io.kubernetes.container.hash: ab6762a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e,PodSandboxId:65bbb4d7efe5afd8099ff4ef00114c6ef456d6b4d5363221972138b81cfc0bc3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709173867581561724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7849f368-0bca-4c2b-ae
72-cbacef9bbb72,},Annotations:map[string]string{io.kubernetes.container.hash: b3086f3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173867580132672,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7
d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772,PodSandboxId:484ad89cf88e2554b842362cc426f924ff51d30e7f14bff7a23c0a1fd37b4661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173862820309191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0fd2b2d3a34444351a58f9cc442592,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0,PodSandboxId:9d7b51641b06fd63936e813ccc91714206dffe2ae20ba89cee69829718659b22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173862822165374,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bddedf1d587af5333bf6d061dbebe3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8122a282,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75,PodSandboxId:2c3c97570a989ff886d9fdd97254fdcf1e146d45c438de3302180cae14b318f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173862789331849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc56f6a18092022bffc9b777210b75f,},Annotations:map[string]string{io.kubernetes.container.hash: 3709
cc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5,PodSandboxId:efab7c788859d9985732ca2f1a43fdd5e21c04f334f127d75320136fc31028de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173862790794588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4104973fb9e5b903cb363d606f23991,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2083c72b-94b3-4a62-9a33-dfe5007020b5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.545137568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f61f4a5-77b5-4d74-9005-09d3d7d72f75 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.545207319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f61f4a5-77b5-4d74-9005-09d3d7d72f75 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.547028218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=779463f1-8c64-40e1-8a26-da059a4051bb name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.547362835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174677547339460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=779463f1-8c64-40e1-8a26-da059a4051bb name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.548064888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=826efa00-b43f-462a-b132-7102400642d3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.548126022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=826efa00-b43f-462a-b132-7102400642d3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:44:37 no-preload-247751 crio[670]: time="2024-02-29 02:44:37.548393523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173898403089298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99cc0a7d8158b241a6363055d477e4538b67123266993ebc4dc7e6a9ab810e19,PodSandboxId:5dcd7325799d3ac205f9c49b90f57add472abce7a9a6ec7400e34d91b3c653e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173878169797352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22d0d5e3-3658-4122-adf1-8faffa8de817,},Annotations:map[string]string{io.kubernetes.container.hash: abba158a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab,PodSandboxId:9a3a5a98dfc7d11e1d336ca7115304ec25c8a76c37f7598640afbcfc07f9c1af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709173875201146415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2z5w8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39b5eb65-690b-488b-9bec-7cfabcc27829,},Annotations:map[string]string{io.kubernetes.container.hash: ab6762a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e,PodSandboxId:65bbb4d7efe5afd8099ff4ef00114c6ef456d6b4d5363221972138b81cfc0bc3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709173867581561724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7849f368-0bca-4c2b-ae
72-cbacef9bbb72,},Annotations:map[string]string{io.kubernetes.container.hash: b3086f3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173867580132672,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7
d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772,PodSandboxId:484ad89cf88e2554b842362cc426f924ff51d30e7f14bff7a23c0a1fd37b4661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173862820309191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0fd2b2d3a34444351a58f9cc442592,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0,PodSandboxId:9d7b51641b06fd63936e813ccc91714206dffe2ae20ba89cee69829718659b22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173862822165374,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bddedf1d587af5333bf6d061dbebe3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8122a282,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75,PodSandboxId:2c3c97570a989ff886d9fdd97254fdcf1e146d45c438de3302180cae14b318f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173862789331849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc56f6a18092022bffc9b777210b75f,},Annotations:map[string]string{io.kubernetes.container.hash: 3709
cc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5,PodSandboxId:efab7c788859d9985732ca2f1a43fdd5e21c04f334f127d75320136fc31028de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173862790794588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4104973fb9e5b903cb363d606f23991,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=826efa00-b43f-462a-b132-7102400642d3 name=/runtime.v1.RuntimeService/ListContainers
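The debug lines above are CRI-O's request/response traces (emitted by its otel-collector interceptors) for the CRI RuntimeService and ImageService RPCs issued while this log bundle was collected. A minimal sketch of driving the same RPCs by hand with crictl, assuming the default minikube CRI-O socket (it matches the node's kubeadm cri-socket annotation further down):

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a         # RuntimeService/ListContainers
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods          # RuntimeService/ListPodSandbox
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo   # ImageService/ImageFsInfo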
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1d3ea01e4d000       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   e77531cab13a8       storage-provisioner
	99cc0a7d8158b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   5dcd7325799d3       busybox
	869cb90ce44f1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   9a3a5a98dfc7d       coredns-76f75df574-2z5w8
	1061c7e86aceb       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   65bbb4d7efe5a       kube-proxy-cdc4l
	3c88c68c0c40f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   e77531cab13a8       storage-provisioner
	92977e2b17423       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   9d7b51641b06f       etcd-no-preload-247751
	d2cd6c6c49c57       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   484ad89cf88e2       kube-scheduler-no-preload-247751
	5520037685c0c       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   efab7c788859d       kube-controller-manager-no-preload-247751
	60cc548bfcd72       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   2c3c97570a989       kube-apiserver-no-preload-247751
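Any row in the table above can be drilled into by its container ID; a sketch, assuming crictl is run on the node itself (e.g. via minikube ssh) and that the printed ID prefixes are unique on this node:

  sudo crictl inspect 1d3ea01e4d000   # full status and annotations of the running storage-provisioner
  sudo crictl logs 3c88c68c0c40f      # output of the exited storage-provisioner attempt 2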
	
	
	==> coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45956 - 20729 "HINFO IN 3196636296519869444.5891557949309614254. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.069741927s
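The HINFO lookup of a random name above is CoreDNS's loop plugin probing itself at startup; the NXDOMAIN answer means no forwarding loop was detected. A quick hedged check that the server resolves cluster names (the pod name and busybox image here are illustrative only):

  kubectl run dnsprobe --rm -it --image=busybox:1.28 --restart=Never -- \
    nslookup kubernetes.default.svc.cluster.local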
	
	
	==> describe nodes <==
	Name:               no-preload-247751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-247751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=no-preload-247751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_21_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:21:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-247751
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:44:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:41:50 +0000   Thu, 29 Feb 2024 02:21:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:41:50 +0000   Thu, 29 Feb 2024 02:21:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:41:50 +0000   Thu, 29 Feb 2024 02:21:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:41:50 +0000   Thu, 29 Feb 2024 02:31:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.114
	  Hostname:    no-preload-247751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 881168a5061a46d9ae56f8a52fa75d96
	  System UUID:                881168a5-061a-46d9-ae56-f8a52fa75d96
	  Boot ID:                    2707afbd-f3e4-443c-abf7-896de325fc97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-76f75df574-2z5w8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-247751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-247751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-247751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-cdc4l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-247751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-zghwq              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-247751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-247751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-247751 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-247751 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-247751 event: Registered Node no-preload-247751 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-247751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-247751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-247751 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-247751 event: Registered Node no-preload-247751 in Controller
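The section above is standard kubectl describe node output; the Allocated resources block is just the per-pod requests summed: 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 850m, which is ~42% of the node's 2 allocatable CPUs. To regenerate it against this profile (assuming the kubeconfig context carries the profile name, as minikube sets up by default):

  kubectl --context no-preload-247751 describe node no-preload-247751
  kubectl --context no-preload-247751 top node   # needs metrics-server to be serving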
	
	
	==> dmesg <==
	[Feb29 02:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052491] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042707] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.519441] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.403110] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.710311] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.595404] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.056156] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059678] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.214871] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.138888] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.258363] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +21.994355] kauditd_printk_skb: 130 callbacks suppressed
	[Feb29 02:31] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +5.739601] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.730898] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.047717] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] <==
	{"level":"info","ts":"2024-02-29T02:31:03.736428Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:31:03.748603Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T02:31:03.748874Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.114:2380"}
	{"level":"info","ts":"2024-02-29T02:31:03.748914Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.114:2380"}
	{"level":"info","ts":"2024-02-29T02:31:03.755428Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d80e54998a205cf3","initial-advertise-peer-urls":["https://192.168.72.114:2380"],"listen-peer-urls":["https://192.168.72.114:2380"],"advertise-client-urls":["https://192.168.72.114:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.114:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:31:03.75555Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:31:04.778834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:04.77892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:04.779038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 received MsgPreVoteResp from d80e54998a205cf3 at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:04.779057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:04.779102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 received MsgVoteResp from d80e54998a205cf3 at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:04.77912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:04.779132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d80e54998a205cf3 elected leader d80e54998a205cf3 at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:04.784816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:31:04.784746Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d80e54998a205cf3","local-member-attributes":"{Name:no-preload-247751 ClientURLs:[https://192.168.72.114:2379]}","request-path":"/0/members/d80e54998a205cf3/attributes","cluster-id":"fe5d4cbbe2066f7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:31:04.78585Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:31:04.78649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:31:04.78651Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:31:04.78965Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.114:2379"}
	{"level":"info","ts":"2024-02-29T02:31:04.791825Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-02-29T02:31:23.213571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.252099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-247751\" ","response":"range_response_count:1 size:4441"}
	{"level":"info","ts":"2024-02-29T02:31:23.213752Z","caller":"traceutil/trace.go:171","msg":"trace[2039286566] range","detail":"{range_begin:/registry/minions/no-preload-247751; range_end:; response_count:1; response_revision:582; }","duration":"177.451224ms","start":"2024-02-29T02:31:23.036282Z","end":"2024-02-29T02:31:23.213733Z","steps":["trace[2039286566] 'range keys from in-memory index tree'  (duration: 177.069838ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T02:41:04.842178Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":825}
	{"level":"info","ts":"2024-02-29T02:41:04.845205Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":825,"took":"1.983721ms","hash":939881084}
	{"level":"info","ts":"2024-02-29T02:41:04.845283Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":939881084,"revision":825,"compact-revision":-1}
	
	
	==> kernel <==
	 02:44:37 up 14 min,  0 users,  load average: 0.37, 0.18, 0.11
	Linux no-preload-247751 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] <==
	I0229 02:39:07.234770       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:41:06.238334       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:41:06.238793       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0229 02:41:07.240032       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:41:07.240286       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:41:07.240388       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:41:07.240055       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:41:07.240538       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:41:07.242411       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:42:07.241593       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:42:07.241695       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:42:07.241707       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:42:07.242760       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:42:07.242918       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:42:07.243082       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:44:07.242497       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:44:07.242840       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:44:07.242875       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:44:07.244025       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:44:07.244149       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:44:07.244175       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] <==
	I0229 02:38:49.285017       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:39:18.881400       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:39:19.297205       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:39:48.886881       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:39:49.305523       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:40:18.893659       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:40:19.315183       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:40:48.898917       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:40:49.323502       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:41:18.904654       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:41:19.333339       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:41:48.911761       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:41:49.345462       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 02:42:06.167182       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="372.443µs"
	E0229 02:42:18.918266       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:42:19.158812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="103.397µs"
	I0229 02:42:19.353740       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:42:48.923258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:42:49.362185       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:43:18.928196       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:43:19.370494       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:43:48.933819       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:43:49.381178       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:44:18.941536       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:44:19.395499       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] <==
	I0229 02:31:07.824255       1 server_others.go:72] "Using iptables proxy"
	I0229 02:31:07.849544       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.114"]
	I0229 02:31:07.928723       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 02:31:07.928768       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:31:07.928781       1 server_others.go:168] "Using iptables Proxier"
	I0229 02:31:07.932029       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:31:07.933383       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 02:31:07.933424       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:31:07.936278       1 config.go:188] "Starting service config controller"
	I0229 02:31:07.936357       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:31:07.936387       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:31:07.936394       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:31:07.936903       1 config.go:315] "Starting node config controller"
	I0229 02:31:07.937054       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:31:08.037108       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:31:08.037176       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:31:08.037282       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] <==
	I0229 02:31:04.117003       1 serving.go:380] Generated self-signed cert in-memory
	W0229 02:31:06.107036       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 02:31:06.107193       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 02:31:06.107332       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 02:31:06.107497       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 02:31:06.244356       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0229 02:31:06.244452       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:31:06.252759       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:31:06.252898       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:31:06.252988       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:31:06.255699       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:31:06.356753       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:42:02 no-preload-247751 kubelet[1288]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:42:02 no-preload-247751 kubelet[1288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:42:02 no-preload-247751 kubelet[1288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:42:06 no-preload-247751 kubelet[1288]: E0229 02:42:06.145492    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:42:19 no-preload-247751 kubelet[1288]: E0229 02:42:19.141585    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:42:32 no-preload-247751 kubelet[1288]: E0229 02:42:32.144098    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:42:44 no-preload-247751 kubelet[1288]: E0229 02:42:44.142714    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:42:57 no-preload-247751 kubelet[1288]: E0229 02:42:57.141845    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:43:02 no-preload-247751 kubelet[1288]: E0229 02:43:02.191226    1288 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:43:02 no-preload-247751 kubelet[1288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:43:02 no-preload-247751 kubelet[1288]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:43:02 no-preload-247751 kubelet[1288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:43:02 no-preload-247751 kubelet[1288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:43:08 no-preload-247751 kubelet[1288]: E0229 02:43:08.144371    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:43:20 no-preload-247751 kubelet[1288]: E0229 02:43:20.141199    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:43:35 no-preload-247751 kubelet[1288]: E0229 02:43:35.141745    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:43:49 no-preload-247751 kubelet[1288]: E0229 02:43:49.141749    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:44:02 no-preload-247751 kubelet[1288]: E0229 02:44:02.191663    1288 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:44:02 no-preload-247751 kubelet[1288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:44:02 no-preload-247751 kubelet[1288]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:44:02 no-preload-247751 kubelet[1288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:44:02 no-preload-247751 kubelet[1288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:44:03 no-preload-247751 kubelet[1288]: E0229 02:44:03.141908    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:44:18 no-preload-247751 kubelet[1288]: E0229 02:44:18.142241    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:44:30 no-preload-247751 kubelet[1288]: E0229 02:44:30.142608    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	
	
	==> storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] <==
	I0229 02:31:38.551746       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:31:38.567731       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:31:38.567872       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 02:31:55.974876       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 02:31:55.975155       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-247751_6a4373fb-9d8c-4ec5-9faf-b5aba65567de!
	I0229 02:31:55.976503       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad204bd3-d8b1-463b-b094-3972bea49d44", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-247751_6a4373fb-9d8c-4ec5-9faf-b5aba65567de became leader
	I0229 02:31:56.075646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-247751_6a4373fb-9d8c-4ec5-9faf-b5aba65567de!
	
	
	==> storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] <==
	I0229 02:31:07.745594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0229 02:31:37.748608       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
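The metrics-server ImagePullBackOff loop in the kubelet log above appears to be induced by the test configuration itself rather than by the cluster: the addon image is re-registered against the unreachable registry fake.domain, so the pull can never succeed. The invocation that sets this up (it also appears in the Audit table further below) is approximately:

	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-247751 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain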
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-247751 -n no-preload-247751
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-247751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-zghwq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-247751 describe pod metrics-server-57f55c9bc5-zghwq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-247751 describe pod metrics-server-57f55c9bc5-zghwq: exit status 1 (66.182353ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-zghwq" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-247751 describe pod metrics-server-57f55c9bc5-zghwq: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.65s)
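For manual triage of similar failures, the post-mortem above reduces to two kubectl calls; a minimal sketch, reusing the context and pod names from this run:

	# list pods in every namespace whose phase is not Running (what helpers_test.go does)
	kubectl --context no-preload-247751 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# describe the offender; by the time this ran here, the pod had already been
	# replaced, hence the NotFound error above
	kubectl --context no-preload-247751 describe pod metrics-server-57f55c9bc5-zghwq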

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.55s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0229 02:36:36.514867  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:37:04.062523  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:38:10.388210  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
E0229 02:38:27.108300  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:39:09.039679  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 02:39:18.684633  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:39:33.433090  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-915633 -n embed-certs-915633
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-02-29 02:45:28.610907648 +0000 UTC m=+5686.304054557
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
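When reproducing locally, the wait condition can be exercised directly; a sketch using the same context, namespace, and label selector as the test (the 9m0s timeout mirrors its window):

	kubectl --context embed-certs-915633 -n kubernetes-dashboard get pods \
	  -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context embed-certs-915633 -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard --for=condition=ready --timeout=9m0s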
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-915633 -n embed-certs-915633
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-915633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-915633 logs -n 25: (2.259256317s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-117441 sudo cat                              | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo find                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo crio                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-117441                                       | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-542968 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | disable-driver-mounts-542968                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:23 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-915633            | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247751             | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071485  | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275488        | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-915633                 | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247751                  | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071485       | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:40 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275488             | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:26:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:26:36.132854  370051 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:26:36.133389  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133407  370051 out.go:304] Setting ErrFile to fd 2...
	I0229 02:26:36.133414  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133912  370051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:26:36.134959  370051 out.go:298] Setting JSON to false
	I0229 02:26:36.135907  370051 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7739,"bootTime":1709165857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:26:36.135982  370051 start.go:139] virtualization: kvm guest
	I0229 02:26:36.137916  370051 out.go:177] * [old-k8s-version-275488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:26:36.139510  370051 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:26:36.139543  370051 notify.go:220] Checking for updates...
	I0229 02:26:36.141206  370051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:26:36.142776  370051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:26:36.143982  370051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:26:36.145097  370051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:26:36.146170  370051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:26:36.147751  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:26:36.148198  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.148298  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.163969  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0229 02:26:36.164373  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.164977  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.165003  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.165394  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.165584  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.167312  370051 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 02:26:36.168337  370051 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:26:36.168641  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.168683  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.184089  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0229 02:26:36.184605  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.185181  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.185210  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.185551  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.185723  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.222261  370051 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:26:36.223363  370051 start.go:299] selected driver: kvm2
	I0229 02:26:36.223374  370051 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.223487  370051 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:26:36.224130  370051 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.224195  370051 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:26:36.239302  370051 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:26:36.239664  370051 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:26:36.239741  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:26:36.239755  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:26:36.239765  370051 start_flags.go:323] config:
	{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.239908  370051 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.241466  370051 out.go:177] * Starting control plane node old-k8s-version-275488 in cluster old-k8s-version-275488
	I0229 02:26:35.666509  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:38.738602  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:36.242536  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:26:36.242564  370051 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 02:26:36.242573  370051 cache.go:56] Caching tarball of preloaded images
	I0229 02:26:36.242641  370051 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:26:36.242651  370051 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 02:26:36.242742  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:26:36.242905  370051 start.go:365] acquiring machines lock for old-k8s-version-275488: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:26:44.818494  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:47.890482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:53.970508  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:57.042448  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:03.122506  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:06.194415  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:12.274520  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:15.346558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:21.426515  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:24.498557  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:30.578502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:33.650482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:39.730548  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:42.802507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:48.882487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:51.954507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:58.034498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:01.106530  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:07.186513  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:10.258485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:16.338519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:19.410521  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:25.490436  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:28.562555  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:34.642534  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:37.714514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:43.794519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:46.866487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:52.946514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:56.018488  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:02.098512  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:05.170472  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:11.250485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:14.322454  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:20.402450  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:23.474533  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:29.554541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:32.626489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:38.706558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:41.778502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:47.858493  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:50.930489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:57.010541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:00.082537  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:06.162498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:09.234502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:12.238620  369591 start.go:369] acquired machines lock for "no-preload-247751" in 4m33.303501223s
	I0229 02:30:12.238705  369591 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:12.238716  369591 fix.go:54] fixHost starting: 
	I0229 02:30:12.239171  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:12.239240  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:12.254984  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0229 02:30:12.255490  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:12.255991  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:30:12.256012  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:12.256463  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:12.256668  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:12.256840  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:30:12.258341  369591 fix.go:102] recreateIfNeeded on no-preload-247751: state=Stopped err=<nil>
	I0229 02:30:12.258371  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	W0229 02:30:12.258522  369591 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:12.260176  369591 out.go:177] * Restarting existing kvm2 VM for "no-preload-247751" ...
	I0229 02:30:12.261521  369591 main.go:141] libmachine: (no-preload-247751) Calling .Start
	I0229 02:30:12.261678  369591 main.go:141] libmachine: (no-preload-247751) Ensuring networks are active...
	I0229 02:30:12.262375  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network default is active
	I0229 02:30:12.262642  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network mk-no-preload-247751 is active
	I0229 02:30:12.262962  369591 main.go:141] libmachine: (no-preload-247751) Getting domain xml...
	I0229 02:30:12.263526  369591 main.go:141] libmachine: (no-preload-247751) Creating domain...
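
For context, the "Restarting existing kvm2 VM" sequence above (ensure networks are active, fetch the domain XML, start the domain) maps onto the libvirt API roughly as below. This is an illustrative sketch against the libvirt.org/go/libvirt bindings, not the actual docker-machine-driver-kvm2 code; the domain and network names are taken from the log:

package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// ensureNetworkActive starts a defined libvirt network if it is not running.
func ensureNetworkActive(conn *libvirt.Connect, name string) error {
	nw, err := conn.LookupNetworkByName(name)
	if err != nil {
		return err
	}
	defer nw.Free()
	active, err := nw.IsActive()
	if err != nil {
		return err
	}
	if !active {
		return nw.Create() // start the defined network
	}
	return nil
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	for _, n := range []string{"default", "mk-no-preload-247751"} {
		if err := ensureNetworkActive(conn, n); err != nil {
			log.Fatalf("network %s: %v", n, err)
		}
	}

	dom, err := conn.LookupDomainByName("no-preload-247751")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boots the stopped domain
		log.Fatal(err)
	}
	fmt.Println("domain started, waiting for DHCP lease...")
}
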
	I0229 02:30:13.474816  369591 main.go:141] libmachine: (no-preload-247751) Waiting to get IP...
	I0229 02:30:13.475810  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.476251  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.476305  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.476230  370599 retry.go:31] will retry after 302.404435ms: waiting for machine to come up
	I0229 02:30:13.780776  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.781237  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.781265  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.781193  370599 retry.go:31] will retry after 364.673363ms: waiting for machine to come up
	I0229 02:30:12.236310  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:12.236352  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:30:12.238426  369508 machine.go:91] provisioned docker machine in 4m37.406828317s
	I0229 02:30:12.238513  369508 fix.go:56] fixHost completed within 4m37.429140371s
	I0229 02:30:12.238526  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 4m37.429164063s
	W0229 02:30:12.238553  369508 start.go:694] error starting host: provision: host is not running
	W0229 02:30:12.238763  369508 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 02:30:12.238784  369508 start.go:709] Will try again in 5 seconds ...
	I0229 02:30:14.148040  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.148530  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.148561  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.148471  370599 retry.go:31] will retry after 430.606986ms: waiting for machine to come up
	I0229 02:30:14.581180  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.581649  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.581679  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.581598  370599 retry.go:31] will retry after 557.726488ms: waiting for machine to come up
	I0229 02:30:15.141289  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.141736  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.141767  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.141675  370599 retry.go:31] will retry after 611.257074ms: waiting for machine to come up
	I0229 02:30:15.754464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.754802  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.754831  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.754752  370599 retry.go:31] will retry after 905.484801ms: waiting for machine to come up
	I0229 02:30:16.661691  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:16.662072  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:16.662099  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:16.662020  370599 retry.go:31] will retry after 1.007584217s: waiting for machine to come up
	I0229 02:30:17.671565  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:17.672118  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:17.672159  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:17.672048  370599 retry.go:31] will retry after 933.310317ms: waiting for machine to come up
	I0229 02:30:18.607108  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:18.607473  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:18.607496  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:18.607426  370599 retry.go:31] will retry after 1.135856775s: waiting for machine to come up
	I0229 02:30:17.239210  369508 start.go:365] acquiring machines lock for embed-certs-915633: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:30:19.744656  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:19.745017  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:19.745047  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:19.744969  370599 retry.go:31] will retry after 2.184552748s: waiting for machine to come up
	I0229 02:30:21.932313  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:21.932764  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:21.932794  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:21.932711  370599 retry.go:31] will retry after 2.256573009s: waiting for machine to come up
	I0229 02:30:24.191551  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:24.191987  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:24.192016  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:24.191948  370599 retry.go:31] will retry after 3.0850751s: waiting for machine to come up
	I0229 02:30:27.278526  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:27.278941  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:27.278977  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:27.278914  370599 retry.go:31] will retry after 3.196492358s: waiting for machine to come up
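
For context, the retry.go lines above poll for the VM's DHCP lease with a growing, jittered delay between attempts. A self-contained sketch of that retry pattern; the backoff constants below are illustrative, not minikube's:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls probe until it succeeds or maxWait elapses, sleeping an
// increasing, jittered duration between attempts, as in the log above.
func retry(probe func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow roughly 1.5x per attempt
	}
}

func main() {
	calls := 0
	err := retry(func() error {
		calls++
		if calls < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, time.Minute)
	fmt.Println("result:", err)
}
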
	I0229 02:30:31.627482  369869 start.go:369] acquired machines lock for "default-k8s-diff-port-071485" in 4m6.129938439s
	I0229 02:30:31.627553  369869 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:31.627561  369869 fix.go:54] fixHost starting: 
	I0229 02:30:31.628005  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:31.628052  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:31.645217  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0229 02:30:31.645607  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:31.646146  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:30:31.646179  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:31.646526  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:31.646754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:31.646941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:30:31.648372  369869 fix.go:102] recreateIfNeeded on default-k8s-diff-port-071485: state=Stopped err=<nil>
	I0229 02:30:31.648410  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	W0229 02:30:31.648603  369869 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:31.650778  369869 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071485" ...
	I0229 02:30:30.479186  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479664  369591 main.go:141] libmachine: (no-preload-247751) Found IP for machine: 192.168.72.114
	I0229 02:30:30.479694  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has current primary IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479705  369591 main.go:141] libmachine: (no-preload-247751) Reserving static IP address...
	I0229 02:30:30.480161  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.480199  369591 main.go:141] libmachine: (no-preload-247751) DBG | skip adding static IP to network mk-no-preload-247751 - found existing host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"}
	I0229 02:30:30.480213  369591 main.go:141] libmachine: (no-preload-247751) Reserved static IP address: 192.168.72.114
	I0229 02:30:30.480233  369591 main.go:141] libmachine: (no-preload-247751) Waiting for SSH to be available...
	I0229 02:30:30.480246  369591 main.go:141] libmachine: (no-preload-247751) DBG | Getting to WaitForSSH function...
	I0229 02:30:30.482557  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.482907  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.482935  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.483110  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH client type: external
	I0229 02:30:30.483136  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa (-rw-------)
	I0229 02:30:30.483166  369591 main.go:141] libmachine: (no-preload-247751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:30.483180  369591 main.go:141] libmachine: (no-preload-247751) DBG | About to run SSH command:
	I0229 02:30:30.483197  369591 main.go:141] libmachine: (no-preload-247751) DBG | exit 0
	I0229 02:30:30.610329  369591 main.go:141] libmachine: (no-preload-247751) DBG | SSH cmd err, output: <nil>: 
	I0229 02:30:30.610691  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetConfigRaw
	I0229 02:30:30.611393  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.614007  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614393  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.614426  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614689  369591 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/config.json ...
	I0229 02:30:30.614872  369591 machine.go:88] provisioning docker machine ...
	I0229 02:30:30.614892  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:30.615096  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615250  369591 buildroot.go:166] provisioning hostname "no-preload-247751"
	I0229 02:30:30.615272  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615444  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.617525  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617800  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.617835  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617898  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.618095  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618289  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618424  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.618564  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.618790  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.618807  369591 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-247751 && echo "no-preload-247751" | sudo tee /etc/hostname
	I0229 02:30:30.740902  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-247751
	
	I0229 02:30:30.740952  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.743879  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744353  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.744396  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744584  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.744843  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745014  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745197  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.745351  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.745525  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.745543  369591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-247751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247751/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-247751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:30.867175  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
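
For context, the two SSH commands above are generated from the machine name: one sets the hostname, the other keeps the 127.0.1.1 entry in /etc/hosts in sync with it. A sketch of composing those command strings (the ssh_runner plumbing that executes them remotely is omitted):

package main

import "fmt"

func hostnameCmd(name string) string {
	return fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
}

func etcHostsCmd(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() {
	fmt.Println(hostnameCmd("no-preload-247751"))
	fmt.Println(etcHostsCmd("no-preload-247751"))
}
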
	I0229 02:30:30.867209  369591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:30.867229  369591 buildroot.go:174] setting up certificates
	I0229 02:30:30.867240  369591 provision.go:83] configureAuth start
	I0229 02:30:30.867248  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.867521  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.870143  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870443  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.870464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870678  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.872992  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873434  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.873463  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873643  369591 provision.go:138] copyHostCerts
	I0229 02:30:30.873713  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:30.873740  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:30.873830  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:30.873937  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:30.873948  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:30.873992  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:30.874070  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:30.874080  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:30.874110  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:30.874240  369591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.no-preload-247751 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube no-preload-247751]
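
For context, configureAuth above regenerates a server certificate whose SANs cover the VM IP, localhost, and the machine name. A minimal self-signed sketch with crypto/x509 using those SANs; the real code signs with the minikube CA rather than self-signing:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-247751"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provision.go line above:
		DNSNames:    []string{"localhost", "minikube", "no-preload-247751"},
		IPAddresses: []net.IP{net.ParseIP("192.168.72.114"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
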
	I0229 02:30:30.921711  369591 provision.go:172] copyRemoteCerts
	I0229 02:30:30.921769  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:30.921793  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.924128  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924436  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.924474  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924628  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.924815  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.924975  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.925073  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.009229  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:31.035962  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:30:31.062947  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:31.089920  369591 provision.go:86] duration metric: configureAuth took 222.667724ms
	I0229 02:30:31.089947  369591 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:31.090145  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:30:31.090256  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.092831  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093148  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.093192  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093338  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.093511  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093699  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093864  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.094032  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.094196  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.094211  369591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:31.381995  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:31.382023  369591 machine.go:91] provisioned docker machine in 767.136363ms
	I0229 02:30:31.382036  369591 start.go:300] post-start starting for "no-preload-247751" (driver="kvm2")
	I0229 02:30:31.382049  369591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:31.382066  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.382560  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:31.382596  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.385219  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385574  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.385602  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385742  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.385955  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.386091  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.386254  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.469621  369591 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:31.474615  369591 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:31.474640  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:31.474702  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:31.474772  369591 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:31.474867  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:31.484964  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:31.512459  369591 start.go:303] post-start completed in 130.406384ms
	I0229 02:30:31.512519  369591 fix.go:56] fixHost completed within 19.27376704s
	I0229 02:30:31.512569  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.515169  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515568  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.515596  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515717  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.515944  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516108  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516260  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.516417  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.516592  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.516605  369591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:30:31.627335  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173831.594794890
	
	I0229 02:30:31.627357  369591 fix.go:206] guest clock: 1709173831.594794890
	I0229 02:30:31.627366  369591 fix.go:219] Guest: 2024-02-29 02:30:31.59479489 +0000 UTC Remote: 2024-02-29 02:30:31.512545974 +0000 UTC m=+292.733991044 (delta=82.248916ms)
	I0229 02:30:31.627395  369591 fix.go:190] guest clock delta is within tolerance: 82.248916ms
	I0229 02:30:31.627403  369591 start.go:83] releasing machines lock for "no-preload-247751", held for 19.38873796s
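
For context, the fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it against the local clock, and skip resyncing when the delta is inside a tolerance. A minimal sketch of that comparison using the values from the log; the tolerance below is an assumed placeholder, not minikube's actual setting:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output (seconds.nanoseconds).
func parseGuestClock(out string) (time.Time, error) {
	sec, nsec, ok := strings.Cut(strings.TrimSpace(out), ".")
	if !ok {
		return time.Time{}, fmt.Errorf("unexpected clock format %q", out)
	}
	s, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	n, err := strconv.ParseInt(nsec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(s, n), nil
}

func main() {
	// Guest and Remote timestamps as reported in the log above.
	guest, err := parseGuestClock("1709173831.594794890")
	if err != nil {
		panic(err)
	}
	local := time.Unix(1709173831, 512545974)
	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // assumed placeholder
	fmt.Printf("guest clock delta %v within tolerance %v: %v\n", delta, tolerance, delta < tolerance)
}
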
	I0229 02:30:31.627429  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.627713  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:31.630486  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.630930  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.630959  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.631131  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631640  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631830  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631920  369591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:31.631983  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.632122  369591 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:31.632160  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.634658  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.634874  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635050  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635079  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635348  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635354  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635379  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635478  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635566  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635633  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635758  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635768  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635934  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.635940  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.719735  369591 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:31.739831  369591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:31.891138  369591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:31.899497  369591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:31.899569  369591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:31.921755  369591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:30:31.921785  369591 start.go:475] detecting cgroup driver to use...
	I0229 02:30:31.921896  369591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:31.938157  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:31.952761  369591 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:31.952834  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:31.966785  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:31.980931  369591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:32.091879  369591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:32.261190  369591 docker.go:233] disabling docker service ...
	I0229 02:30:32.261272  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:32.278862  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:32.295382  369591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:32.433426  369591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:32.557975  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
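
For context, the docker.go sequence above stops, disables, and masks the cri-docker and docker units so that CRI-O alone owns the CRI socket. The same sequence as a plain exec loop (requires root, purely illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		if err != nil {
			// Stops are best-effort: a unit that is absent or already
			// stopped is fine, mirroring the log above.
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
}
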
	I0229 02:30:32.573791  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:32.595797  369591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:32.595848  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.608978  369591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:32.609042  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.621681  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.634251  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.647107  369591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
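
For context, the sed invocations above pin the pause image, switch cri-o to the cgroupfs cgroup manager, and set conmon_cgroup to "pod" in /etc/crio/crio.conf.d/02-crio.conf. The same edits expressed with regexp over the file contents, as a sketch:

package main

import (
	"fmt"
	"regexp"
)

func configureCrio(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgmgr := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = conmon.ReplaceAllString(conf, "") // drop any existing conmon_cgroup line
	conf = cgmgr.ReplaceAllString(conf,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := `[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
pause_image = "registry.k8s.io/pause:3.8"
`
	fmt.Print(configureCrio(in))
}
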
	I0229 02:30:32.660478  369591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:32.672596  369591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:32.672662  369591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:32.688480  369591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
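
For context, the sysctl probe above fails with status 255 because br_netfilter is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding. The same probe-then-fallback shape (requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("netfilter not verified, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe failed:", err)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward:", err)
	}
}
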
	I0229 02:30:32.700769  369591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:32.823703  369591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:30:33.004444  369591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:33.004531  369591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:33.010801  369591 start.go:543] Will wait 60s for crictl version
	I0229 02:30:33.010862  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.015224  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:33.064627  369591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:33.064721  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.108265  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.142639  369591 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 02:30:33.144169  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:33.147250  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147609  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:33.147644  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147836  369591 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:33.153138  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:33.169427  369591 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:30:33.169481  369591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:33.214079  369591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 02:30:33.214113  369591 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:30:33.214193  369591 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.214216  369591 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.214252  369591 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.214276  369591 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.214335  369591 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.214323  369591 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.214354  369591 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 02:30:33.214241  369591 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.215880  369591 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215928  369591 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.215947  369591 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.216082  369591 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.216136  369591 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.216252  369591 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.348095  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 02:30:33.434211  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.496911  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.499249  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.503235  369591 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0229 02:30:33.503274  369591 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.503307  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.507506  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.548265  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.551287  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.589427  369591 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0229 02:30:33.589474  369591 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.589523  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.590660  369591 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0229 02:30:33.590688  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.590708  369591 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.590763  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.636886  369591 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0229 02:30:33.636934  369591 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.637001  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.664221  369591 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0229 02:30:33.664266  369591 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.664316  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.691890  369591 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0229 02:30:33.691945  369591 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.691978  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.691993  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.692003  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692096  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.692107  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.692104  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692165  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.793616  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793708  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.793723  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793772  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793839  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 02:30:33.793853  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793856  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 02:30:33.793884  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0229 02:30:33.793902  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.793910  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:33.793914  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:33.793936  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
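The 369591 lines above show minikube's image-cache flow: `podman image inspect --format {{.Id}}` checks whether each image is already present at the expected hash, a mismatch is logged as "needs transfer", the stale tag is removed with `crictl rmi`, and the cached tarball under /var/lib/minikube/images is loaded with `podman load -i`. A minimal Go sketch of that check-then-load shape follows; `ensureCachedImage` is a hypothetical name for illustration, not minikube's actual cache_images.go code.

	// Hypothetical sketch of the check-then-load flow logged above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensureCachedImage loads a cached image tarball into the runtime unless
	// the image is already present at the expected ID.
	func ensureCachedImage(image, wantID, tarball string) error {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err == nil && strings.TrimSpace(string(out)) == wantID {
			return nil // already present at the expected hash, nothing to do
		}
		// "needs transfer": drop the stale tag, then load the cached tarball.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", tarball, err)
		}
		return nil
	}

	func main() {
		err := ensureCachedImage(
			"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
			"bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f",
			"/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2")
		fmt.Println("ensureCachedImage:", err)
	}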
	I0229 02:30:31.652037  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Start
	I0229 02:30:31.652202  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring networks are active...
	I0229 02:30:31.652984  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network default is active
	I0229 02:30:31.653457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network mk-default-k8s-diff-port-071485 is active
	I0229 02:30:31.653909  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Getting domain xml...
	I0229 02:30:31.654724  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Creating domain...
	I0229 02:30:32.911561  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting to get IP...
	I0229 02:30:32.912505  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.912932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.913032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:32.912928  370716 retry.go:31] will retry after 285.213813ms: waiting for machine to come up
	I0229 02:30:33.199327  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199733  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199764  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.199678  370716 retry.go:31] will retry after 334.890426ms: waiting for machine to come up
	I0229 02:30:33.536492  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.536976  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.537006  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.536924  370716 retry.go:31] will retry after 344.946846ms: waiting for machine to come up
	I0229 02:30:33.883432  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883911  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.883858  370716 retry.go:31] will retry after 516.135135ms: waiting for machine to come up
	I0229 02:30:34.401167  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401592  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401621  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.401543  370716 retry.go:31] will retry after 538.013174ms: waiting for machine to come up
	I0229 02:30:34.941529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942080  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942116  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.942039  370716 retry.go:31] will retry after 883.013858ms: waiting for machine to come up
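The interleaved 369869 lines come from a parallel test waiting for the default-k8s-diff-port-071485 VM to obtain a DHCP lease; each `retry.go:31` entry polls again after a randomized, growing delay. A reduced sketch of that poll-with-backoff loop, where `lookupIP` is a hypothetical stand-in for querying libvirt's DHCP leases:

	// Minimal poll-with-growing-delay sketch of the "waiting for machine to
	// come up" loop logged above; real delays are randomized.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay = delay * 3 / 2 // grow the delay, roughly as in the log
			}
		}
		return "", errors.New("timed out waiting for an IP address")
	}

	func main() {
		ip, err := waitForIP(func() (string, error) {
			return "", errors.New("unable to find current IP address")
		}, 2*time.Second)
		fmt.Println(ip, err)
	}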
	I0229 02:30:33.850786  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0229 02:30:33.850868  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 02:30:33.850977  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:34.154343  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.987957  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (3.194013383s)
	I0229 02:30:36.987999  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0229 02:30:36.988100  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.194139784s)
	I0229 02:30:36.988127  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0229 02:30:36.988148  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.194207246s)
	I0229 02:30:36.988178  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0229 02:30:36.988156  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988191  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.194323563s)
	I0229 02:30:36.988206  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988236  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988269  369591 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.833890629s)
	I0229 02:30:36.988240  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.13724749s)
	I0229 02:30:36.988310  369591 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 02:30:36.988331  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988343  369591 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.988375  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:36.993483  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:38.351556  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.363290185s)
	I0229 02:30:38.351599  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0229 02:30:38.351633  369591 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351632  369591 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.358113254s)
	I0229 02:30:38.351686  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 02:30:38.351705  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351782  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:35.827402  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827906  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:35.827872  370716 retry.go:31] will retry after 902.653821ms: waiting for machine to come up
	I0229 02:30:36.732470  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732925  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732957  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:36.732863  370716 retry.go:31] will retry after 1.322376383s: waiting for machine to come up
	I0229 02:30:38.057306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057874  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:38.057790  370716 retry.go:31] will retry after 1.16249498s: waiting for machine to come up
	I0229 02:30:39.221714  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222197  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:39.222156  370716 retry.go:31] will retry after 1.912383064s: waiting for machine to come up
	I0229 02:30:42.350149  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.998331984s)
	I0229 02:30:42.350198  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0229 02:30:42.350214  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.99848453s)
	I0229 02:30:42.350266  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0229 02:30:42.350305  369591 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:42.350357  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:41.135736  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136113  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136144  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:41.136058  370716 retry.go:31] will retry after 2.823296742s: waiting for machine to come up
	I0229 02:30:43.960885  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961677  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961703  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:43.961582  370716 retry.go:31] will retry after 3.266272258s: waiting for machine to come up
	I0229 02:30:44.528869  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.178478896s)
	I0229 02:30:44.528915  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0229 02:30:44.528947  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:44.529014  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:46.991074  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462030604s)
	I0229 02:30:46.991103  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0229 02:30:46.991129  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:46.991195  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:47.229005  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229478  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229511  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:47.229417  370716 retry.go:31] will retry after 3.429712893s: waiting for machine to come up
	I0229 02:30:51.887858  370051 start.go:369] acquired machines lock for "old-k8s-version-275488" in 4m15.644916266s
	I0229 02:30:51.887935  370051 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:51.887944  370051 fix.go:54] fixHost starting: 
	I0229 02:30:51.888374  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:51.888428  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:51.905851  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0229 02:30:51.906292  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:51.906778  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:30:51.906806  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:51.907250  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:51.907459  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:30:51.907631  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetState
	I0229 02:30:51.909061  370051 fix.go:102] recreateIfNeeded on old-k8s-version-275488: state=Stopped err=<nil>
	I0229 02:30:51.909093  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	W0229 02:30:51.909251  370051 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:51.911318  370051 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275488" ...
	I0229 02:30:50.662939  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663341  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Found IP for machine: 192.168.61.233
	I0229 02:30:50.663366  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserving static IP address...
	I0229 02:30:50.663404  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has current primary IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663745  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.663781  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserved static IP address: 192.168.61.233
	I0229 02:30:50.663804  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | skip adding static IP to network mk-default-k8s-diff-port-071485 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"}
	I0229 02:30:50.663819  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for SSH to be available...
	I0229 02:30:50.663830  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Getting to WaitForSSH function...
	I0229 02:30:50.665924  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666270  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.666306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666411  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH client type: external
	I0229 02:30:50.666435  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa (-rw-------)
	I0229 02:30:50.666464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:50.666477  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | About to run SSH command:
	I0229 02:30:50.666489  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | exit 0
	I0229 02:30:50.794598  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | SSH cmd err, output: <nil>: 
	I0229 02:30:50.795011  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetConfigRaw
	I0229 02:30:50.795753  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:50.798443  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.798796  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.798822  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.799151  369869 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/config.json ...
	I0229 02:30:50.799410  369869 machine.go:88] provisioning docker machine ...
	I0229 02:30:50.799440  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:50.799684  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.799937  369869 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071485"
	I0229 02:30:50.799963  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.800129  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.802457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802786  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.802813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802923  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.803087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803281  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803393  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.803527  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.803744  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.803757  369869 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071485 && echo "default-k8s-diff-port-071485" | sudo tee /etc/hostname
	I0229 02:30:50.930812  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071485
	
	I0229 02:30:50.930849  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.933650  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934017  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.934057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934217  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.934458  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934651  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.934964  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.935141  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.935159  369869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071485/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:51.057233  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:51.057266  369869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:51.057307  369869 buildroot.go:174] setting up certificates
	I0229 02:30:51.057321  369869 provision.go:83] configureAuth start
	I0229 02:30:51.057335  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:51.057615  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.060233  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060563  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.060595  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060707  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.062583  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.062889  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.062938  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.063065  369869 provision.go:138] copyHostCerts
	I0229 02:30:51.063121  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:51.063140  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:51.063193  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:51.063290  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:51.063304  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:51.063332  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:51.063396  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:51.063403  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:51.063420  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:51.063482  369869 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071485 san=[192.168.61.233 192.168.61.233 localhost 127.0.0.1 minikube default-k8s-diff-port-071485]
	I0229 02:30:51.180356  369869 provision.go:172] copyRemoteCerts
	I0229 02:30:51.180417  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:51.180446  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.182981  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183262  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.183295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183465  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.183656  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.183814  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.183958  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.270548  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:51.297136  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 02:30:51.323133  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:51.349241  369869 provision.go:86] duration metric: configureAuth took 291.905825ms
	I0229 02:30:51.349269  369869 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:51.349453  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:30:51.349529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.352119  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352473  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.352503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352658  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.352839  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353009  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353122  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.353304  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.353480  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.353495  369869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:51.639987  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:51.640022  369869 machine.go:91] provisioned docker machine in 840.591751ms
	I0229 02:30:51.640041  369869 start.go:300] post-start starting for "default-k8s-diff-port-071485" (driver="kvm2")
	I0229 02:30:51.640057  369869 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:51.640087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.640450  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:51.640486  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.643118  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643427  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.643464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643661  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.643871  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.644025  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.644164  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.730150  369869 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:51.735109  369869 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:51.735135  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:51.735207  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:51.735298  369869 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:51.735416  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:51.745416  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:51.771727  369869 start.go:303] post-start completed in 131.66845ms
	I0229 02:30:51.771756  369869 fix.go:56] fixHost completed within 20.144195498s
	I0229 02:30:51.771782  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.774300  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774582  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.774610  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774744  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.774972  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.775481  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.775648  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.775659  369869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:30:51.887656  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173851.865903243
	
	I0229 02:30:51.887683  369869 fix.go:206] guest clock: 1709173851.865903243
	I0229 02:30:51.887691  369869 fix.go:219] Guest: 2024-02-29 02:30:51.865903243 +0000 UTC Remote: 2024-02-29 02:30:51.771760886 +0000 UTC m=+266.432013426 (delta=94.142357ms)
	I0229 02:30:51.887738  369869 fix.go:190] guest clock delta is within tolerance: 94.142357ms
	I0229 02:30:51.887744  369869 start.go:83] releasing machines lock for "default-k8s-diff-port-071485", held for 20.260217484s
	I0229 02:30:51.887771  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.888047  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.890930  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891264  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.891294  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891491  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892002  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892209  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892299  369869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:51.892370  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.892472  369869 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:51.892503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.895178  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895415  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895591  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895626  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895769  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895800  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895820  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.895966  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.896055  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896141  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896212  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896367  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.896447  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.976085  369869 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:52.001946  369869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:52.156753  369869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:52.164196  369869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:52.164302  369869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:52.189176  369869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:30:52.189201  369869 start.go:475] detecting cgroup driver to use...
	I0229 02:30:52.189281  369869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:52.207647  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:52.223752  369869 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:52.223842  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:52.246026  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:52.262180  369869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:52.409077  369869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:52.583777  369869 docker.go:233] disabling docker service ...
	I0229 02:30:52.583850  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:52.601434  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:52.617382  369869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:52.757258  369869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:52.898036  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:30:52.915787  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:52.939344  369869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:52.939417  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.951659  369869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:52.951722  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.963072  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.974800  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.986490  369869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:30:52.998630  369869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:53.009783  369869 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:53.009862  369869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:53.026356  369869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:30:53.038720  369869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:53.171220  369869 ssh_runner.go:195] Run: sudo systemctl restart crio
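The sequence above reconfigures the runtime in place: sed swaps the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, the bridge CNI configs are masked, br_netfilter and IPv4 forwarding are enabled, and crio is restarted. A sketch of the substitution step only, assuming direct file access rather than minikube's ssh_runner; `setConfLine` is a hypothetical helper, not minikube's API.

	// Sketch of the in-place config substitution the sed commands above
	// perform; paths and replacement lines are taken from the log.
	package main

	import (
		"os"
		"regexp"
	)

	// setConfLine replaces any line assigning keyPattern with newLine,
	// mirroring: sed -i 's|^.*key = .*$|key = "value"|' <path>
	func setConfLine(path, keyPattern, newLine string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile("(?m)^.*" + keyPattern + " = .*$")
		return os.WriteFile(path, re.ReplaceAll(data, []byte(newLine)), 0o644)
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		_ = setConfLine(conf, "pause_image", `pause_image = "registry.k8s.io/pause:3.9"`)
		_ = setConfLine(conf, "cgroup_manager", `cgroup_manager = "cgroupfs"`)
		// followed by: modprobe br_netfilter, echo 1 > .../ip_forward,
		// systemctl daemon-reload && systemctl restart crio (see log above)
	}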
	I0229 02:30:53.326032  369869 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:53.326102  369869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:53.332369  369869 start.go:543] Will wait 60s for crictl version
	I0229 02:30:53.332431  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:30:53.336784  369869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:53.378780  369869 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:53.378902  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.411158  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.447038  369869 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:30:49.053324  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.062103665s)
	I0229 02:30:49.053353  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0229 02:30:49.053378  369591 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.053426  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.910791  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0229 02:30:49.910854  369591 cache_images.go:123] Successfully loaded all cached images
	I0229 02:30:49.910862  369591 cache_images.go:92] LoadImages completed in 16.696734078s
	I0229 02:30:49.910994  369591 ssh_runner.go:195] Run: crio config
	I0229 02:30:49.961413  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:30:49.961435  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:49.961456  369591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:49.961509  369591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-247751 NodeName:no-preload-247751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:49.961701  369591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-247751"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
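The rendered config above is one YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check that such a stream parses cleanly, sketched with gopkg.in/yaml.v3 (the file name is an assumption):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // path is an assumption
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	// Decode each document in the multi-document stream in turn.
    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // end of stream
    			}
    			log.Fatalf("document %d: %v", i, err)
    		}
    		fmt.Printf("doc %d: %s %s\n", i, doc.APIVersion, doc.Kind)
    	}
    }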
	I0229 02:30:49.961801  369591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-247751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
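Note the empty ExecStart= line in the drop-in above: systemd appends ExecStart= values from drop-ins, so an override must first clear the inherited command with an empty assignment before supplying its own. A sketch of rendering such a drop-in from a template (field names are hypothetical; minikube renders it from its own cluster config):

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletDropIn mirrors the 10-kubeadm.conf drop-in written above: the
    // first, empty ExecStart= resets any command inherited from the base unit.
    const kubeletDropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.Binary}} --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("dropin").Parse(kubeletDropIn))
    	// Hypothetical values for illustration only.
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Binary": "/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet",
    		"Node":   "no-preload-247751",
    		"IP":     "192.168.72.114",
    	})
    }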
	I0229 02:30:49.961866  369591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 02:30:49.973105  369591 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:49.973170  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:49.983178  369591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0229 02:30:50.001511  369591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 02:30:50.019574  369591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0229 02:30:50.037993  369591 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:50.042075  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
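The bash one-liner above keeps the /etc/hosts update idempotent: drop any existing line for the name, append a fresh ip-to-name mapping, and copy the result back over the original. The same logic as a small Go function (the path in main is a stand-in for /etc/hosts):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    // upsertHost removes any line ending in "\t"+name and appends "ip\tname",
    // mirroring the grep -v / echo / cp pipeline in the log above.
    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var out []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			out = append(out, line)
    		}
    	}
    	out = append(out, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHost("hosts.test", "192.168.72.114", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }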
	I0229 02:30:50.054761  369591 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751 for IP: 192.168.72.114
	I0229 02:30:50.054796  369591 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:50.054976  369591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:50.055031  369591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:50.055146  369591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/client.key
	I0229 02:30:50.055243  369591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key.9adeb8c5
	I0229 02:30:50.055310  369591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key
	I0229 02:30:50.055440  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:50.055481  369591 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:50.055502  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:50.055542  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:50.055577  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:50.055658  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:50.055728  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:50.056454  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:50.083764  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:50.110733  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:50.139180  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:50.167000  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:50.194044  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:50.220671  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:50.247561  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:50.274577  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:50.300997  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:50.327718  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:50.355463  369591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:50.374921  369591 ssh_runner.go:195] Run: openssl version
	I0229 02:30:50.381614  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:50.393546  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398532  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398594  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.404719  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:30:50.416507  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:50.428072  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433031  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433106  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.439174  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:50.450778  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:50.462238  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467219  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467269  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.473395  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
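The test -L / ln -fs pairs above maintain OpenSSL's hashed CA directory: openssl x509 -hash prints the subject-name hash (b5213941, 3ec20f2e, 51391683 in this run), and a symlink named <hash>.0 in /etc/ssl/certs lets verification locate the CA by that hash. A sketch reproducing the link step (the PEM path is an assumption):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem" // assumption
    	// openssl prints the subject hash used to name the symlink.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941
    	link := "/etc/ssl/certs/" + hash + ".0"
    	// Replace any stale link, then point <hash>.0 at the PEM.
    	_ = os.Remove(link)
    	if err := os.Symlink(pem, link); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("linked %s -> %s", link, pem)
    }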
	I0229 02:30:50.484817  369591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:50.489643  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:50.496274  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:50.502579  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:50.508665  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:50.514827  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:50.520958  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
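Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, which is what decides whether certs get regenerated on restart. The same check in pure Go with crypto/x509 (the certificate path is an assumption):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within d, matching openssl x509 -checkend semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }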
	I0229 02:30:50.527032  369591 kubeadm.go:404] StartCluster: {Name:no-preload-247751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:50.527147  369591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:50.527194  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:50.565834  369591 cri.go:89] found id: ""
	I0229 02:30:50.565931  369591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:50.577305  369591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:50.577354  369591 kubeadm.go:636] restartCluster start
	I0229 02:30:50.577408  369591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:50.587881  369591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:50.588896  369591 kubeconfig.go:92] found "no-preload-247751" server: "https://192.168.72.114:8443"
	I0229 02:30:50.591223  369591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:50.601374  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:50.601434  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:50.613730  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.102422  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.102539  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.116483  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.601564  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.601657  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.615481  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.102039  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.102123  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.121300  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.601999  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.602093  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.618701  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.102291  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.102403  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.117898  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.602410  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.602496  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.618760  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
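The repeating "Checking apiserver status" entries above are a fixed-interval poll: roughly every 500ms the runner asks pgrep for a kube-apiserver process and logs the failure until one appears. A minimal loop in that style (the overall timeout is an assumption):

    package main

    import (
    	"context"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()

    	for {
    		// pgrep exits 1 when nothing matches, so err != nil means "not up yet".
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			log.Println("apiserver process found")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			log.Fatal("timed out waiting for apiserver process")
    		case <-tick.C:
    		}
    	}
    }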
	I0229 02:30:53.448437  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:53.451649  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.451998  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:53.452052  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.452302  369869 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:53.458709  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:53.477744  369869 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:30:53.477831  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:53.527511  369869 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:30:53.527593  369869 ssh_runner.go:195] Run: which lz4
	I0229 02:30:53.532370  369869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:30:53.537149  369869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:30:53.537179  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:30:51.912520  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .Start
	I0229 02:30:51.912688  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring networks are active...
	I0229 02:30:51.913511  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network default is active
	I0229 02:30:51.913929  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network mk-old-k8s-version-275488 is active
	I0229 02:30:51.914378  370051 main.go:141] libmachine: (old-k8s-version-275488) Getting domain xml...
	I0229 02:30:51.915191  370051 main.go:141] libmachine: (old-k8s-version-275488) Creating domain...
	I0229 02:30:53.179261  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting to get IP...
	I0229 02:30:53.180359  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.180800  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.180922  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.180789  370858 retry.go:31] will retry after 282.360524ms: waiting for machine to come up
	I0229 02:30:53.465135  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.465708  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.465742  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.465651  370858 retry.go:31] will retry after 341.876004ms: waiting for machine to come up
	I0229 02:30:53.809322  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.809734  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.809876  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.809797  370858 retry.go:31] will retry after 356.208548ms: waiting for machine to come up
	I0229 02:30:54.167329  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.167824  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.167852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.167760  370858 retry.go:31] will retry after 395.76503ms: waiting for machine to come up
	I0229 02:30:54.565496  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.565976  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.566004  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.565933  370858 retry.go:31] will retry after 617.898012ms: waiting for machine to come up
	I0229 02:30:55.185679  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:55.186193  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:55.186237  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:55.186144  370858 retry.go:31] will retry after 911.947678ms: waiting for machine to come up
	I0229 02:30:56.099334  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:56.099788  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:56.099815  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:56.099726  370858 retry.go:31] will retry after 1.132066509s: waiting for machine to come up
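The "will retry after ..." lines come from a jittered, growing backoff while polling libvirt for a DHCP lease (282ms, 341ms, 356ms, 395ms, 617ms, 911ms, 1.13s in this run). A generic helper in that style (growth factor and jitter width are assumptions):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry calls fn until it succeeds or attempts run out, sleeping a jittered,
    // geometrically growing delay between tries, like the retry.go lines above.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		// +/-25% jitter so concurrent waiters don't poll in lockstep.
    		j := time.Duration(rand.Int63n(int64(delay)/2)) - delay/4
    		fmt.Printf("will retry after %v\n", delay+j)
    		time.Sleep(delay + j)
    		delay = delay * 3 / 2
    	}
    	return errors.New("out of retries")
    }

    func main() {
    	n := 0
    	_ = retry(8, 300*time.Millisecond, func() error {
    		n++
    		if n < 5 {
    			return errors.New("no IP yet") // stand-in for the DHCP lease lookup
    		}
    		return nil
    	})
    }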
	I0229 02:30:54.102304  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.102485  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.123193  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:54.601763  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.601890  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.621846  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.102417  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.102503  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.129010  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.601478  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.601532  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.620169  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.101701  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.101776  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.121369  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.601447  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.601550  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.617079  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.101509  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.101648  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.121691  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.601658  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.601754  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.620357  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.101829  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.101921  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.115818  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.602403  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.602509  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.621857  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.599398  369869 crio.go:444] Took 2.067052 seconds to copy over tarball
	I0229 02:30:55.599501  369869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:30:58.543850  369869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944309258s)
	I0229 02:30:58.543884  369869 crio.go:451] Took 2.944447 seconds to extract the tarball
	I0229 02:30:58.543896  369869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:30:58.592492  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:58.751479  369869 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:30:58.751509  369869 cache_images.go:84] Images are preloaded, skipping loading
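The extraction above runs tar with -I lz4 (decompress through lz4) and --xattrs --xattrs-include security.capability, so binaries in the preload keep their file capabilities. Invoking the same command from Go (flags and paths taken from the log):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Same flags as the log: preserve the security.capability xattr and
    	// filter the archive through lz4 while extracting under /var.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    }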
	I0229 02:30:58.751576  369869 ssh_runner.go:195] Run: crio config
	I0229 02:30:58.813487  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:30:58.813515  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:58.813540  369869 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:58.813566  369869 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.233 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071485 NodeName:default-k8s-diff-port-071485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:58.813785  369869 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071485"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:30:58.813898  369869 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-071485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0229 02:30:58.813971  369869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:30:58.826199  369869 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:58.826324  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:58.837384  369869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 02:30:58.856023  369869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:30:58.876432  369869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0229 02:30:58.900684  369869 ssh_runner.go:195] Run: grep 192.168.61.233	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:58.905249  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:58.920007  369869 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485 for IP: 192.168.61.233
	I0229 02:30:58.920046  369869 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:58.920249  369869 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:58.920319  369869 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:58.920432  369869 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/client.key
	I0229 02:30:58.995037  369869 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key.b3fc8ab0
	I0229 02:30:58.995173  369869 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key
	I0229 02:30:58.995377  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:58.995430  369869 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:58.995451  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:58.995503  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:58.995543  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:58.995590  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:58.995653  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:58.996607  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:59.026487  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:59.054725  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:59.082553  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:59.110374  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:59.141972  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:59.170097  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:59.201206  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:59.232790  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:59.263940  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:59.292401  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:59.321920  369869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:59.343921  369869 ssh_runner.go:195] Run: openssl version
	I0229 02:30:59.351308  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:59.364059  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369212  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369302  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.375683  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:59.389046  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:59.404101  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409433  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409491  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.416126  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:59.429674  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:59.443405  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448931  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448991  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.455800  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:30:59.469013  369869 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:59.474745  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:59.481689  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:59.488868  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:59.496380  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:59.503593  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:59.510485  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 02:30:59.517770  369869 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-071485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:59.517894  369869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:59.517941  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:59.564631  369869 cri.go:89] found id: ""
	I0229 02:30:59.564718  369869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:59.578812  369869 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:59.578881  369869 kubeadm.go:636] restartCluster start
	I0229 02:30:59.578954  369869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:59.592900  369869 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.593909  369869 kubeconfig.go:92] found "default-k8s-diff-port-071485" server: "https://192.168.61.233:8444"
	I0229 02:30:59.596083  369869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:59.609384  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.609466  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.625617  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.110139  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.110282  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.127301  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.233610  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:57.234113  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:57.234145  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:57.234063  370858 retry.go:31] will retry after 1.238348525s: waiting for machine to come up
	I0229 02:30:58.474146  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:58.474696  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:58.474733  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:58.474642  370858 retry.go:31] will retry after 1.373712981s: waiting for machine to come up
	I0229 02:30:59.850075  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:59.850504  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:59.850526  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:59.850460  370858 retry.go:31] will retry after 2.156069813s: waiting for machine to come up
	I0229 02:30:59.101727  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.101812  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.120465  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.602060  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.602155  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.620588  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.102108  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.102203  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.120822  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.602443  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.602545  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.616796  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.616835  369591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:00.616858  369591 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:00.616873  369591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:00.616940  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:00.661747  369591 cri.go:89] found id: ""
	I0229 02:31:00.661869  369591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:00.684098  369591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:00.696989  369591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:00.697059  369591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708553  369591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708583  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:00.827929  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.578572  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.818119  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.892891  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
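Instead of a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing data directory, using the exact commands logged above. A sketch driving those phases in order:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		// Mirror the log: sudo env PATH=<binaries dir>:$PATH kubeadm <phase> --config ...
    		args := []string{"env", "PATH=/var/lib/minikube/binaries/v1.29.0-rc.2:" + os.Getenv("PATH"), "kubeadm"}
    		args = append(args, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			log.Fatalf("kubeadm %v: %v\n%s", p, err, out)
    		}
    	}
    }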
	I0229 02:31:01.964926  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:01.965037  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.466098  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.965290  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.465897  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.483060  369591 api_server.go:72] duration metric: took 1.518135432s to wait for apiserver process to appear ...
	I0229 02:31:03.483103  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:03.483127  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:00.610179  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.610299  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.630460  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.109543  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.109680  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.129578  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.610203  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.610301  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.630078  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.109835  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.109945  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.127400  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.610160  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.610269  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.630581  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.109702  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.109836  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.129754  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.610303  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.610389  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.629702  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.110325  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.110459  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.128740  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.610305  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.610403  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.624716  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:05.110349  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.110457  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.130070  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.007911  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:02.008381  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:02.008409  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:02.008330  370858 retry.go:31] will retry after 1.864134048s: waiting for machine to come up
	I0229 02:31:03.873997  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:03.874606  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:03.874653  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:03.874547  370858 retry.go:31] will retry after 2.45659808s: waiting for machine to come up
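
The retry.go lines above wait for the VM's DHCP lease with delays that grow and jitter (1.86s, 2.45s, then 3.22s and 5.27s further down). A minimal sketch of that pattern, assuming a generic operation rather than minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		// +/-25% jitter so concurrent waiters do not poll in lockstep.
		jitter := time.Duration(rand.Int63n(int64(delay)/2)) - delay/4
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow ~1.5x per attempt, as the log suggests
	}
	return errors.New("machine did not come up")
}

func main() {
	err := retryWithBackoff(func() error { return errors.New("no IP yet") }, 3, time.Second)
	fmt.Println(err)
}
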
	I0229 02:31:06.111554  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.111581  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.111596  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.191055  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.191090  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.483401  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.489220  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.489254  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:06.983921  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.988354  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.988430  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:07.483305  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:07.489830  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:31:07.497146  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:31:07.497187  369591 api_server.go:131] duration metric: took 4.014075718s to wait for apiserver health ...
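
The healthz wait above tolerates two transient states: 403 Forbidden while the anonymous probe hits the API server before the RBAC bootstrap roles exist, and 500 while poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still failing, stopping only on a plain 200 "ok". A minimal sketch of such a probe, assuming the endpoint from the log and skipping TLS verification because the request is unauthenticated:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.72.114:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
			// 403 before RBAC bootstrap, 500 while poststarthooks fail.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
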
	I0229 02:31:07.497201  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:31:07.497210  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:07.498785  369591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:07.500032  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:07.530625  369591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
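
The scp above copies a 457-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist, but the log does not show its payload. A sketch assuming a representative bridge + portmap chain with host-local IPAM; the JSON is illustrative, not the exact file:

package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	// Writing here needs the root privileges minikube gets via sudo over SSH.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
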
	I0229 02:31:07.594249  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:07.604940  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:07.604973  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:07.604980  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:07.604989  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:07.604995  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:07.605003  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:07.605015  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:07.605022  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:07.605032  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:07.605052  369591 system_pods.go:74] duration metric: took 10.776743ms to wait for pod list to return data ...
	I0229 02:31:07.605061  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:07.608034  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:07.608059  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:07.608073  369591 node_conditions.go:105] duration metric: took 3.004467ms to run NodePressure ...
	I0229 02:31:07.608096  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:07.975871  369591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980949  369591 kubeadm.go:787] kubelet initialised
	I0229 02:31:07.980970  369591 kubeadm.go:788] duration metric: took 5.071971ms waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980979  369591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:07.986764  369591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.992673  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992698  369591 pod_ready.go:81] duration metric: took 5.911106ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.992707  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992717  369591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.997300  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997322  369591 pod_ready.go:81] duration metric: took 4.594827ms waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.997330  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997335  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.004032  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004052  369591 pod_ready.go:81] duration metric: took 6.71117ms waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.004060  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004066  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.009947  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.009985  369591 pod_ready.go:81] duration metric: took 5.909502ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.010001  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.010009  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.398938  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398965  369591 pod_ready.go:81] duration metric: took 388.944943ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.398975  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398982  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.797706  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797733  369591 pod_ready.go:81] duration metric: took 398.745142ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.797744  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797751  369591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:09.198467  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198496  369591 pod_ready.go:81] duration metric: took 400.737315ms waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:09.198506  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198511  369591 pod_ready.go:38] duration metric: took 1.217523271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
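
Each pod_ready wait above short-circuits because the hosting node is not yet "Ready"; once it is, the check reduces to watching the pod's Ready condition. A minimal client-go sketch of that per-pod check, assuming a placeholder kubeconfig path and the coredns pod name taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-2z5w8", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}
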
	I0229 02:31:09.198530  369591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:31:09.211194  369591 ops.go:34] apiserver oom_adj: -16
	I0229 02:31:09.211222  369591 kubeadm.go:640] restartCluster took 18.633858862s
	I0229 02:31:09.211232  369591 kubeadm.go:406] StartCluster complete in 18.684207766s
	I0229 02:31:09.211263  369591 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.211346  369591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:09.212899  369591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.213213  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:31:09.213318  369591 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:31:09.213406  369591 addons.go:69] Setting storage-provisioner=true in profile "no-preload-247751"
	I0229 02:31:09.213426  369591 addons.go:69] Setting default-storageclass=true in profile "no-preload-247751"
	I0229 02:31:09.213446  369591 addons.go:69] Setting metrics-server=true in profile "no-preload-247751"
	I0229 02:31:09.213463  369591 addons.go:234] Setting addon metrics-server=true in "no-preload-247751"
	I0229 02:31:09.213465  369591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-247751"
	I0229 02:31:09.213463  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0229 02:31:09.213472  369591 addons.go:243] addon metrics-server should already be in state true
	I0229 02:31:09.213435  369591 addons.go:234] Setting addon storage-provisioner=true in "no-preload-247751"
	W0229 02:31:09.213515  369591 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:31:09.213519  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213541  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213915  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213924  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213943  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213978  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.218976  369591 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-247751" context rescaled to 1 replicas
	I0229 02:31:09.219015  369591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:31:09.220657  369591 out.go:177] * Verifying Kubernetes components...
	I0229 02:31:09.221954  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:31:09.230064  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0229 02:31:09.230528  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.231030  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.231053  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.231526  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.231762  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.233032  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0229 02:31:09.233487  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.233929  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0229 02:31:09.234003  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234028  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.234293  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.234406  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.234784  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234811  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.235009  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235068  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.235163  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.235631  369591 addons.go:234] Setting addon default-storageclass=true in "no-preload-247751"
	W0229 02:31:09.235651  369591 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:31:09.235679  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.235738  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235772  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.236123  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.236157  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.250756  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0229 02:31:09.251190  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.251830  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.251855  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.252228  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.252403  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.254210  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.256240  369591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:09.257522  369591 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.257537  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:31:09.257552  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.255418  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0229 02:31:09.255485  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0229 02:31:09.258003  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258129  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258432  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258457  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258664  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258676  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258822  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.258983  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.259278  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.259313  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.259533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.261295  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.261320  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.262706  369591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:31:05.610163  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.610319  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.627782  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.110424  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.110521  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.129628  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.610193  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.610330  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.624176  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.110249  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.110354  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.129955  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.609462  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.609536  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.623687  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.110263  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.110407  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.126900  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.610447  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.625182  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.109675  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.109759  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.124637  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.610399  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.630681  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.630715  369869 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:09.630757  369869 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:09.630777  369869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:09.630844  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:09.683876  369869 cri.go:89] found id: ""
	I0229 02:31:09.683963  369869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:09.706059  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:09.719868  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:09.719939  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734591  369869 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734622  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:09.862689  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
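
After the "needs reconfigure" decision above, the cluster is rebuilt phase by phase: certs and kubeconfig here, then kubelet-start, control-plane, and etcd a few lines below. A minimal sketch of that sequence, assuming a hypothetical runSSH stand-in for minikube's ssh_runner:

package main

import "fmt"

func reconfigureCluster(runSSH func(cmd string) error) error {
	const env = `sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH"`
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(`/bin/bash -c "%s kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml"`, env, p)
		if err := runSSH(cmd); err != nil {
			return fmt.Errorf("phase %q failed: %w", p, err)
		}
	}
	return nil
}

func main() {
	// Demo with a stub runner that just echoes the commands.
	_ = reconfigureCluster(func(cmd string) error { fmt.Println(cmd); return nil })
}
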
	I0229 02:31:09.263808  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:31:09.263830  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:31:09.263849  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.261760  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.261947  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.263890  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.264339  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.264522  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.264704  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.266885  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267339  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.267358  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.267649  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.267782  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.267862  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.302813  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0229 02:31:09.303329  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.303878  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.303909  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.304305  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.304509  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.306147  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.306434  369591 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.306454  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:31:09.306472  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.309029  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309345  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.309382  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309670  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.309872  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.310048  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.310193  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.402579  369591 node_ready.go:35] waiting up to 6m0s for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:09.402756  369591 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:31:09.420259  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.426629  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:31:09.426655  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:31:09.446028  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.457219  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:31:09.457244  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:31:09.504028  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:09.504054  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:31:09.554137  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:10.485560  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.039492326s)
	I0229 02:31:10.485633  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485646  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.485928  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065634917s)
	I0229 02:31:10.485970  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485986  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486053  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.486072  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486092  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486104  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486112  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486254  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486287  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486304  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486320  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.487538  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487556  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487566  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.487543  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487582  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487579  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.494355  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.494374  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.494614  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.494635  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.494633  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.559201  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.005004802s)
	I0229 02:31:10.559258  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559276  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559592  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559614  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559625  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559633  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559899  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559915  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559926  369591 addons.go:470] Verifying addon metrics-server=true in "no-preload-247751"
	I0229 02:31:10.561833  369591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:31:06.333259  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:06.333776  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:06.333811  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:06.333733  370858 retry.go:31] will retry after 3.223893936s: waiting for machine to come up
	I0229 02:31:09.559349  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:09.559937  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:09.559968  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:09.559891  370858 retry.go:31] will retry after 5.278822831s: waiting for machine to come up
	I0229 02:31:10.560171  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.563240  369591 addons.go:505] enable addons completed in 1.349905679s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 02:31:11.408006  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:10.805438  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.016546  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.132323  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.212201  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:11.212309  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:11.713366  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.212866  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.713327  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.732027  369869 api_server.go:72] duration metric: took 1.519826457s to wait for apiserver process to appear ...
	I0229 02:31:12.732056  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:12.732078  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.109299  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.109349  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.109368  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.166169  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.166209  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.232359  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.267052  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.267099  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.096073  369508 start.go:369] acquired machines lock for "embed-certs-915633" in 58.856797615s
	I0229 02:31:16.096132  369508 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:31:16.096144  369508 fix.go:54] fixHost starting: 
	I0229 02:31:16.096651  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:16.096692  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:16.115912  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0229 02:31:16.116419  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:16.116967  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:31:16.116999  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:16.117362  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:16.117562  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:16.117742  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:31:16.119589  369508 fix.go:102] recreateIfNeeded on embed-certs-915633: state=Stopped err=<nil>
	I0229 02:31:16.119614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	W0229 02:31:16.119809  369508 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:31:16.121566  369508 out.go:177] * Restarting existing kvm2 VM for "embed-certs-915633" ...
	I0229 02:31:14.842498  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843049  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has current primary IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843083  370051 main.go:141] libmachine: (old-k8s-version-275488) Found IP for machine: 192.168.39.160
	I0229 02:31:14.843112  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserving static IP address...
	I0229 02:31:14.843485  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.843510  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | skip adding static IP to network mk-old-k8s-version-275488 - found existing host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"}
	I0229 02:31:14.843525  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserved static IP address: 192.168.39.160
	I0229 02:31:14.843535  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Getting to WaitForSSH function...
	I0229 02:31:14.843553  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting for SSH to be available...
	I0229 02:31:14.845739  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846087  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.846120  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846289  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH client type: external
	I0229 02:31:14.846319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa (-rw-------)
	I0229 02:31:14.846355  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:14.846372  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | About to run SSH command:
	I0229 02:31:14.846390  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | exit 0
	I0229 02:31:14.979384  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | SSH cmd err, output: <nil>: 
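Editor's note: WaitForSSH above shells out to the system /usr/bin/ssh with strict non-interactive options and runs "exit 0" until the command succeeds; "SSH cmd err, output: <nil>" marks success. A rough sketch of that loop (the retry cadence is an assumption; the flags and key path come from the log above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForSSH retries a no-op command over ssh until the guest
    // accepts the connection or the timeout elapses.
    func waitForSSH(ip, keyPath string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("/usr/bin/ssh",
    			"-o", "StrictHostKeyChecking=no",
    			"-o", "UserKnownHostsFile=/dev/null",
    			"-o", "PasswordAuthentication=no",
    			"-o", "ConnectTimeout=10",
    			"-i", keyPath,
    			"docker@"+ip, "exit 0")
    		if err := cmd.Run(); err == nil {
    			return nil // guest is reachable over SSH
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("ssh to %s not available after %s", ip, timeout)
    }

    func main() {
    	err := waitForSSH("192.168.39.160",
    		"/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa",
    		3*time.Minute)
    	fmt.Println(err)
    }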
	I0229 02:31:14.979896  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetConfigRaw
	I0229 02:31:14.980716  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:14.983852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984278  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.984319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984639  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:31:14.984865  370051 machine.go:88] provisioning docker machine ...
	I0229 02:31:14.984890  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:14.985140  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985324  370051 buildroot.go:166] provisioning hostname "old-k8s-version-275488"
	I0229 02:31:14.985347  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985494  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:14.988036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988438  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.988464  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988633  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:14.988829  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989003  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989174  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:14.989361  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:14.989604  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:14.989621  370051 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275488 && echo "old-k8s-version-275488" | sudo tee /etc/hostname
	I0229 02:31:15.125564  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275488
	
	I0229 02:31:15.125605  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.128963  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129570  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.129652  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129735  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.129996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130185  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130380  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.130616  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.130872  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.130900  370051 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275488/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:15.272298  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:31:15.272337  370051 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:15.272368  370051 buildroot.go:174] setting up certificates
	I0229 02:31:15.272385  370051 provision.go:83] configureAuth start
	I0229 02:31:15.272402  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:15.272772  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:15.276382  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.276838  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.276869  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.277051  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.279927  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280298  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.280326  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280505  370051 provision.go:138] copyHostCerts
	I0229 02:31:15.280555  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:15.280566  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:15.280619  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:15.280749  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:15.280764  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:15.280789  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:15.280857  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:15.280871  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:15.280891  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:15.280954  370051 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275488 san=[192.168.39.160 192.168.39.160 localhost 127.0.0.1 minikube old-k8s-version-275488]
	I0229 02:31:15.360428  370051 provision.go:172] copyRemoteCerts
	I0229 02:31:15.360487  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:15.360512  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.363540  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.363931  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.363966  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.364154  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.364337  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.364495  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.364622  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.453643  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:15.483233  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 02:31:15.512164  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:15.543453  370051 provision.go:86] duration metric: configureAuth took 271.048547ms
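Editor's note: configureAuth above issues a server certificate whose SANs cover the VM IP, localhost, and the machine name (see the san=[...] list in the log), then copies the CA and server certs to /etc/docker. A minimal sketch of issuing such a cert with Go's crypto/x509, self-signed for brevity where minikube signs with its own CA key (error handling elided; the SAN values are copied from the log, the validity window is an assumption):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-275488"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the log: VM IP, loopback, and host names.
    		IPAddresses: []net.IP{net.ParseIP("192.168.39.160"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-275488"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }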
	I0229 02:31:15.543484  370051 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:15.543705  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:31:15.543816  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.546472  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.546807  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.546835  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.547049  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.547272  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547455  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547662  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.547861  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.548035  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.548052  370051 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:15.835533  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:15.835572  370051 machine.go:91] provisioned docker machine in 850.691497ms
	I0229 02:31:15.835589  370051 start.go:300] post-start starting for "old-k8s-version-275488" (driver="kvm2")
	I0229 02:31:15.835604  370051 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:15.835635  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:15.835995  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:15.836025  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.838946  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839297  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.839330  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839460  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.839665  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.839839  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.840008  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.925849  370051 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:15.931227  370051 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:15.931260  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:15.931363  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:15.931465  370051 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:15.931574  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:15.942500  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:15.972803  370051 start.go:303] post-start completed in 137.19736ms
	I0229 02:31:15.972838  370051 fix.go:56] fixHost completed within 24.084893996s
	I0229 02:31:15.972873  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.975698  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976063  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.976093  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976279  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.976518  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976659  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976795  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.976959  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.977119  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.977130  370051 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:31:16.095892  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173876.041987567
	
	I0229 02:31:16.095917  370051 fix.go:206] guest clock: 1709173876.041987567
	I0229 02:31:16.095927  370051 fix.go:219] Guest: 2024-02-29 02:31:16.041987567 +0000 UTC Remote: 2024-02-29 02:31:15.972843681 +0000 UTC m=+279.886639354 (delta=69.143886ms)
	I0229 02:31:16.095954  370051 fix.go:190] guest clock delta is within tolerance: 69.143886ms
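Editor's note: the clock check above runs "date +%s.%N" on the guest, parses the epoch reading, and compares it against host time; here the 69ms delta is accepted. A sketch of that comparison (the 2s tolerance is an assumption of this sketch, not necessarily minikube's value, and float parsing loses sub-microsecond precision that fix.go preserves):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // checkClockDelta compares a guest `date +%s.%N` reading against
    // host time and rejects large skews.
    func checkClockDelta(guestOut string, tolerance time.Duration) error {
    	sec, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return err
    	}
    	guest := time.Unix(0, int64(sec*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta > tolerance {
    		return fmt.Errorf("guest clock delta %s exceeds tolerance %s", delta, tolerance)
    	}
    	fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
    	return nil
    }

    func main() {
    	// Guest reading taken from the log above.
    	_ = checkClockDelta("1709173876.041987567", 2*time.Second)
    }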
	I0229 02:31:16.095962  370051 start.go:83] releasing machines lock for "old-k8s-version-275488", held for 24.208056775s
	I0229 02:31:16.095996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.096336  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:16.099518  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100016  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.100060  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100189  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100751  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100955  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.101035  370051 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:16.101084  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.101167  370051 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:16.101190  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.104588  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.104638  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105000  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105059  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105101  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105335  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105546  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105590  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105821  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105832  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106002  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:16.106028  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106180  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.732828  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.739797  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.739827  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.232355  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.240421  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:16.240462  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.732451  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.740118  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:31:16.748529  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:31:16.748567  369869 api_server.go:131] duration metric: took 4.0165029s to wait for apiserver health ...
	I0229 02:31:16.748580  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:31:16.748588  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:16.750561  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:16.194120  370051 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:16.220808  370051 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:16.386082  370051 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:16.393419  370051 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:16.393512  370051 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:16.418966  370051 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:16.419003  370051 start.go:475] detecting cgroup driver to use...
	I0229 02:31:16.419087  370051 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:16.444372  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:16.466354  370051 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:16.466430  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:16.488710  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:16.509561  370051 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:16.651716  370051 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:16.840453  370051 docker.go:233] disabling docker service ...
	I0229 02:31:16.840538  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:16.869611  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:16.890123  370051 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:17.047701  370051 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:17.225457  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:17.248553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:17.275486  370051 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 02:31:17.275572  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.290350  370051 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:17.290437  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.304093  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.320562  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
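Editor's note: the three sed invocations above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place. Reconstructed from those commands (not copied from the VM; the TOML section headers are added for context and are standard crio.conf structure), the relevant lines end up equivalent to:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

CRI-O requires conmon_cgroup to be "pod" when cgroup_manager is "cgroupfs", which is why the old conmon_cgroup line is deleted and re-added immediately after the cgroup_manager line.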
	I0229 02:31:17.339790  370051 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:17.356570  370051 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:17.371208  370051 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:17.371303  370051 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:17.390748  370051 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:31:17.405750  370051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:17.555023  370051 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:17.754419  370051 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:17.754508  370051 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:17.760190  370051 start.go:543] Will wait 60s for crictl version
	I0229 02:31:17.760245  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:17.765195  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:17.815839  370051 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:17.815953  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.857470  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.896796  370051 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 02:31:13.906892  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:15.907106  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:16.914513  369591 node_ready.go:49] node "no-preload-247751" has status "Ready":"True"
	I0229 02:31:16.914545  369591 node_ready.go:38] duration metric: took 7.511932085s waiting for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:16.914560  369591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:16.925133  369591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940518  369591 pod_ready.go:92] pod "coredns-76f75df574-2z5w8" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:16.940553  369591 pod_ready.go:81] duration metric: took 15.382701ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940568  369591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.122967  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Start
	I0229 02:31:16.123141  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring networks are active...
	I0229 02:31:16.124019  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network default is active
	I0229 02:31:16.124630  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network mk-embed-certs-915633 is active
	I0229 02:31:16.125118  369508 main.go:141] libmachine: (embed-certs-915633) Getting domain xml...
	I0229 02:31:16.126026  369508 main.go:141] libmachine: (embed-certs-915633) Creating domain...
	I0229 02:31:17.664537  369508 main.go:141] libmachine: (embed-certs-915633) Waiting to get IP...
	I0229 02:31:17.665883  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.666462  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.666595  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.666455  371066 retry.go:31] will retry after 193.172159ms: waiting for machine to come up
	I0229 02:31:17.861043  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.861754  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.861781  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.861651  371066 retry.go:31] will retry after 298.133474ms: waiting for machine to come up
	I0229 02:31:18.161304  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.161851  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.161886  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.161818  371066 retry.go:31] will retry after 402.680342ms: waiting for machine to come up
	I0229 02:31:18.566482  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.567145  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.567165  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.567068  371066 retry.go:31] will retry after 536.886613ms: waiting for machine to come up
	I0229 02:31:19.106090  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.106797  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.106823  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.106714  371066 retry.go:31] will retry after 583.032631ms: waiting for machine to come up
	I0229 02:31:19.691531  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.692096  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.692127  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.692000  371066 retry.go:31] will retry after 780.156818ms: waiting for machine to come up
	I0229 02:31:16.752375  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:16.783785  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
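Editor's note: "scp memory" above means the bridge CNI config is generated in-process and written straight to /etc/cni/net.d/1-k8s.conflist. A sketch of generating a representative bridge conflist in Go (field values here are illustrative bridge-plugin defaults, not the exact 457-byte file from the log):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Illustrative bridge + portmap chain of the kind a conflist holds.
    	conflist := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"addIf":            "true",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"hairpinMode":      true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	out, _ := json.MarshalIndent(conflist, "", "  ")
    	fmt.Println(string(out))
    }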
	I0229 02:31:16.816646  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:16.829430  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:16.829480  369869 system_pods.go:61] "coredns-5dd5756b68-652db" [d989183e-dc0d-4913-8eab-fdfac0cf7ad7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:16.829491  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [aba29f47-cf0e-4ee5-8d18-7647b36369e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:16.829501  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [26a426b2-d5b7-456e-a733-3317009974ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:16.829517  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [a896f9fa-991f-44bb-bd97-02fac3494eea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:16.829528  369869 system_pods.go:61] "kube-proxy-g976s" [bc750be0-ae2b-4033-b65b-f1cccaebf32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:16.829536  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [d99d25bf-25f4-4057-aedb-fc5ba797af47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:16.829544  369869 system_pods.go:61] "metrics-server-57f55c9bc5-86frx" [0ad81c0d-3f9a-45d8-93d8-bbb9e276d5b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:16.829560  369869 system_pods.go:61] "storage-provisioner" [92683c3e-04c1-4cef-988d-3b8beb7d4399] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:16.829570  369869 system_pods.go:74] duration metric: took 12.896339ms to wait for pod list to return data ...
	I0229 02:31:16.829584  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:16.837494  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:16.837524  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:16.837535  369869 node_conditions.go:105] duration metric: took 7.942051ms to run NodePressure ...
	I0229 02:31:16.837560  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:17.293873  369869 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300874  369869 kubeadm.go:787] kubelet initialised
	I0229 02:31:17.300907  369869 kubeadm.go:788] duration metric: took 7.00259ms waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300919  369869 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:17.315838  369869 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.328228  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328265  369869 pod_ready.go:81] duration metric: took 12.396088ms waiting for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.328278  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328287  369869 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.335458  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335487  369869 pod_ready.go:81] duration metric: took 7.145351ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.335497  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335505  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.356278  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356365  369869 pod_ready.go:81] duration metric: took 20.849982ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.356385  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356396  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:19.376170  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:17.898162  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:17.901332  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.901809  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:17.901840  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.902046  370051 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:17.907256  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:17.924135  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:31:17.924218  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:17.986923  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:17.986992  370051 ssh_runner.go:195] Run: which lz4
	I0229 02:31:17.992110  370051 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 02:31:17.997252  370051 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:17.997287  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 02:31:20.124958  370051 crio.go:444] Took 2.132885 seconds to copy over tarball
	I0229 02:31:20.125075  370051 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
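Editor's note: the preload flow above is check-then-copy: stat /preloaded.tar.lz4 exits 1 (absent), so the 441MB preload tarball is scp'd over and unpacked into /var with lz4-compressed tar. A sketch of that pattern with os/exec (runRemote and ensurePreload are names of this sketch, stand-ins for minikube's ssh_runner; paths and tar flags come from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runRemote executes a command on the guest over ssh.
    func runRemote(ip string, args ...string) error {
    	return exec.Command("ssh", append([]string{"docker@" + ip}, args...)...).Run()
    }

    // ensurePreload copies the preload tarball only if the guest does
    // not already have it, then extracts it into /var.
    func ensurePreload(ip, localTarball string) error {
    	if err := runRemote(ip, "stat", "/preloaded.tar.lz4"); err != nil {
    		if err := exec.Command("scp", localTarball, "docker@"+ip+":/preloaded.tar.lz4").Run(); err != nil {
    			return fmt.Errorf("scp preload: %w", err)
    		}
    	}
    	return runRemote(ip, "sudo", "tar", "--xattrs", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    }

    func main() {
    	_ = ensurePreload("192.168.39.160", "preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4")
    }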
	I0229 02:31:18.948383  369591 pod_ready.go:102] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:20.950330  369591 pod_ready.go:92] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:20.950359  369591 pod_ready.go:81] duration metric: took 4.009782336s waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.950372  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460878  369591 pod_ready.go:92] pod "kube-apiserver-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.460907  369591 pod_ready.go:81] duration metric: took 1.510525429s waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460922  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468463  369591 pod_ready.go:92] pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.468487  369591 pod_ready.go:81] duration metric: took 7.556807ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468497  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476459  369591 pod_ready.go:92] pod "kube-proxy-cdc4l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.476488  369591 pod_ready.go:81] duration metric: took 7.983254ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476501  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482564  369591 pod_ready.go:92] pod "kube-scheduler-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.482589  369591 pod_ready.go:81] duration metric: took 6.080532ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482598  369591 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.474186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:20.474741  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:20.474784  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:20.474647  371066 retry.go:31] will retry after 845.550951ms: waiting for machine to come up
	I0229 02:31:21.322246  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:21.323007  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:21.323031  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:21.322935  371066 retry.go:31] will retry after 1.085864892s: waiting for machine to come up
	I0229 02:31:22.410244  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:22.410735  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:22.410766  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:22.410687  371066 retry.go:31] will retry after 1.587558593s: waiting for machine to come up
	I0229 02:31:24.000303  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:24.000914  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:24.000944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:24.000828  371066 retry.go:31] will retry after 2.058374822s: waiting for machine to come up
	I0229 02:31:21.867552  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.972250  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.981829  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.981860  369869 pod_ready.go:81] duration metric: took 6.625453699s waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.981875  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994568  369869 pod_ready.go:92] pod "kube-proxy-g976s" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.994597  369869 pod_ready.go:81] duration metric: took 12.712769ms waiting for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994609  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002085  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:24.002110  369869 pod_ready.go:81] duration metric: took 7.492788ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002133  369869 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.625489  370051 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.500380961s)
	I0229 02:31:23.625526  370051 crio.go:451] Took 3.500531 seconds to extract the tarball
	I0229 02:31:23.625536  370051 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:31:23.671458  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:23.714048  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:23.714087  370051 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:31:23.714189  370051 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.714213  370051 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.714309  370051 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:31:23.714424  370051 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.714269  370051 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.714461  370051 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.714519  370051 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.714192  370051 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.716086  370051 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.716076  370051 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716088  370051 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.716143  370051 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.716081  370051 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.716275  370051 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:31:23.838722  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.844569  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 02:31:23.853089  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.857738  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.864060  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.865519  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.926256  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.997349  370051 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:31:23.997401  370051 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.997463  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.010625  370051 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:31:24.010674  370051 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:31:24.010722  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083140  370051 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:31:24.083203  370051 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:31:24.083232  370051 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:31:24.083247  370051 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.083266  370051 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.083269  370051 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.083308  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083319  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083364  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083166  370051 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:31:24.083426  370051 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.083471  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123878  370051 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:31:24.123928  370051 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.123972  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123982  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:24.123973  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:31:24.124043  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.124051  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.124097  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.124153  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.152226  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.270585  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:31:24.305436  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:31:24.305532  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:31:24.305621  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:31:24.305629  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:31:24.305799  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:31:24.316950  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:31:24.635837  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:24.791670  370051 cache_images.go:92] LoadImages completed in 1.077558745s
	W0229 02:31:24.791798  370051 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
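LoadImages falls back to per-image cache files on the host, and the W-line above shows the failure mode: a cache entry that was never downloaded. A sketch that pre-checks the cache the way the paths in the log are laid out; cachePathFor and its ':' to '_' name mapping are inferred from those paths, not taken from minikube's source:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePathFor maps an image name like "registry.k8s.io/kube-proxy:v1.16.0"
// onto the on-disk cache layout visible in the log
// (.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0).
func cachePathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
	for _, img := range []string{
		"registry.k8s.io/kube-controller-manager:v1.16.0",
		"registry.k8s.io/etcd:3.3.15-0",
	} {
		if _, err := os.Stat(cachePathFor(cacheDir, img)); err != nil {
			fmt.Println("missing from cache:", img) // matches the "no such file" warning above
		}
	}
}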
	I0229 02:31:24.791902  370051 ssh_runner.go:195] Run: crio config
	I0229 02:31:24.851132  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:31:24.851164  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:24.851189  370051 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:24.851213  370051 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275488 NodeName:old-k8s-version-275488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:31:24.851423  370051 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-275488
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.160:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
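The kubeadm config just printed is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by "---" separators, filled in from the kubeadm options struct logged above it. A toy sketch of rendering such a blob with text/template; the template text and field names here are illustrative, not minikube's own:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down slice of the config above, rendered from a parameter map.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, map[string]interface{}{
		"AdvertiseAddress": "192.168.39.160",
		"APIServerPort":    8443,
		"CRISocket":        "/var/run/crio/crio.sock",
		"NodeName":         "old-k8s-version-275488",
	})
}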
	
	I0229 02:31:24.851524  370051 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275488 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:31:24.851598  370051 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:31:24.864237  370051 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:24.864330  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:24.879552  370051 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 02:31:24.901027  370051 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:24.920638  370051 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 02:31:24.941894  370051 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:24.947439  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:24.962396  370051 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488 for IP: 192.168.39.160
	I0229 02:31:24.962435  370051 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:24.962621  370051 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:24.962673  370051 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:24.962781  370051 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.key
	I0229 02:31:24.962851  370051 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619
	I0229 02:31:24.962919  370051 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key
	I0229 02:31:24.963087  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:24.963126  370051 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:24.963138  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:24.963185  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:24.963213  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:24.963245  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:24.963296  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:24.963980  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:24.996049  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:25.030503  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:25.057695  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:25.091982  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:25.126636  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:25.156613  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:25.186480  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:25.221012  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:25.254122  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:25.282646  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:25.312624  370051 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:25.335020  370051 ssh_runner.go:195] Run: openssl version
	I0229 02:31:25.342920  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:25.355808  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361349  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361433  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.368335  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:25.380799  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:25.393069  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398466  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398539  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.404776  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:25.416735  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:25.428884  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434503  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434584  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.441187  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:31:25.453174  370051 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:25.458712  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:25.466032  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:25.473895  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:25.482948  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:25.491808  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:25.499003  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
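Each "openssl x509 ... -checkend 86400" run above asks one question per certificate: does it expire within the next 24 hours (86400 seconds)? The same check in pure Go with crypto/x509, as a sketch; minikube shells out to openssl on the guest instead:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the question `openssl x509 -checkend 86400` answers in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err)
}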
	I0229 02:31:25.506691  370051 kubeadm.go:404] StartCluster: {Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:25.506829  370051 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:25.506883  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:25.551867  370051 cri.go:89] found id: ""
	I0229 02:31:25.551970  370051 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:25.564446  370051 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:25.564476  370051 kubeadm.go:636] restartCluster start
	I0229 02:31:25.564545  370051 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:25.576275  370051 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:25.577406  370051 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:25.578043  370051 kubeconfig.go:146] "old-k8s-version-275488" context is missing from /home/jenkins/minikube-integration/18063-316644/kubeconfig - will repair!
	I0229 02:31:25.578979  370051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:25.580805  370051 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:25.592154  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:25.592259  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:25.609268  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:26.092701  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.092827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.108636  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:24.491508  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.492827  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.496040  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.062093  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:26.062582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:26.062612  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:26.062525  371066 retry.go:31] will retry after 2.231071357s: waiting for machine to come up
	I0229 02:31:28.295693  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:28.296180  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:28.296214  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:28.296116  371066 retry.go:31] will retry after 2.376277578s: waiting for machine to come up
	I0229 02:31:26.010834  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.031628  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.592320  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.592412  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.606907  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.092891  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.093028  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.112353  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.592956  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.593058  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.612315  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.092611  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.092729  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.108095  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.592592  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.592679  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.612145  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.092605  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.092720  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.113807  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.593002  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.593085  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.609337  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.092667  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.092757  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.112800  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.592328  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.592415  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.610909  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:31.092418  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.092529  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.109490  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.990551  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.990785  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:30.675432  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:30.675962  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:30.675995  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:30.675901  371066 retry.go:31] will retry after 4.442717853s: waiting for machine to come up
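The retry.go:31 lines show the wait-for-machine loop sleeping progressively longer, jittered intervals between IP lookups (845ms, 1.08s, 1.58s, 2.05s, ... 4.44s). A generic sketch of that retry-with-backoff shape; the growth factor and jitter below are assumptions, not minikube's exact tuning:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff polls fn until it succeeds or attempts run out, sleeping
// an exponentially growing, jittered interval between tries, similar to the
// "will retry after ..." lines retry.go:31 prints above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return errors.New("gave up waiting for machine to come up")
}

func main() {
	_ = retryWithBackoff(5, 500*time.Millisecond, func() error {
		return errors.New("unable to find current IP address")
	})
}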
	I0229 02:31:30.511576  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.515611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:35.010325  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:31.593046  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.593128  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.608148  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.092187  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.092299  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.107573  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.593184  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.593312  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.607993  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.092500  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.092603  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.107359  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.592987  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.593101  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.608041  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.092919  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.093023  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.107597  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.593200  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.593295  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.608100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.092589  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.092683  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.107100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.592815  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.592928  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.610879  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.610916  370051 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:35.610930  370051 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:35.610947  370051 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:35.611032  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:35.660059  370051 cri.go:89] found id: ""
	I0229 02:31:35.660146  370051 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:35.682067  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:35.694455  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:35.694542  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707118  370051 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707149  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:35.834811  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
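With no kubeconfigs left to clean up, restartCluster regenerates certificates and kubeconfigs by invoking individual kubeadm init phases against the generated config, rather than running a full kubeadm init. A sketch of that invocation pattern (the binary path and flags match the log lines above; the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhase runs one `kubeadm init phase <phase> all` against the config
// minikube wrote to /var/tmp/minikube/kubeadm.yaml, with the versioned
// binaries directory prepended to PATH as in the log.
func runInitPhase(phase string) error {
	cmd := exec.Command("sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.16.0:"+os.Getenv("PATH"),
		"kubeadm", "init", "phase", phase, "all",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, phase := range []string{"certs", "kubeconfig"} {
		if err := runInitPhase(phase); err != nil {
			fmt.Println(phase, "phase failed:", err)
		}
	}
}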
	I0229 02:31:35.123364  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123906  369508 main.go:141] libmachine: (embed-certs-915633) Found IP for machine: 192.168.50.218
	I0229 02:31:35.123925  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has current primary IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123931  369508 main.go:141] libmachine: (embed-certs-915633) Reserving static IP address...
	I0229 02:31:35.124398  369508 main.go:141] libmachine: (embed-certs-915633) Reserved static IP address: 192.168.50.218
	I0229 02:31:35.124423  369508 main.go:141] libmachine: (embed-certs-915633) Waiting for SSH to be available...
	I0229 02:31:35.124441  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.124468  369508 main.go:141] libmachine: (embed-certs-915633) DBG | skip adding static IP to network mk-embed-certs-915633 - found existing host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"}
	I0229 02:31:35.124487  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Getting to WaitForSSH function...
	I0229 02:31:35.126676  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127004  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.127035  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127137  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH client type: external
	I0229 02:31:35.127168  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa (-rw-------)
	I0229 02:31:35.127199  369508 main.go:141] libmachine: (embed-certs-915633) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:35.127213  369508 main.go:141] libmachine: (embed-certs-915633) DBG | About to run SSH command:
	I0229 02:31:35.127224  369508 main.go:141] libmachine: (embed-certs-915633) DBG | exit 0
	I0229 02:31:35.251075  369508 main.go:141] libmachine: (embed-certs-915633) DBG | SSH cmd err, output: <nil>: 
	I0229 02:31:35.251474  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetConfigRaw
	I0229 02:31:35.252256  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.254934  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255350  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.255378  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255676  369508 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/config.json ...
	I0229 02:31:35.255881  369508 machine.go:88] provisioning docker machine ...
	I0229 02:31:35.255905  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:35.256154  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256344  369508 buildroot.go:166] provisioning hostname "embed-certs-915633"
	I0229 02:31:35.256369  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256506  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.258794  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259163  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.259186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259337  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.259551  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259716  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259875  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.260066  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.260256  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.260269  369508 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-915633 && echo "embed-certs-915633" | sudo tee /etc/hostname
	I0229 02:31:35.383734  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-915633
	
	I0229 02:31:35.383770  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.386559  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.386913  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.386944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.387121  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.387359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387631  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387815  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.387979  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.388158  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.388175  369508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-915633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-915633/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-915633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:35.521449  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:31:35.521490  369508 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:35.521530  369508 buildroot.go:174] setting up certificates
	I0229 02:31:35.521544  369508 provision.go:83] configureAuth start
	I0229 02:31:35.521573  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.521923  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.524829  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525193  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.525217  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525348  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.527582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.527980  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.528012  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.528164  369508 provision.go:138] copyHostCerts
	I0229 02:31:35.528216  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:35.528234  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:35.528290  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:35.528384  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:35.528396  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:35.528415  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:35.528514  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:35.528525  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:35.528544  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:35.528591  369508 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.embed-certs-915633 san=[192.168.50.218 192.168.50.218 localhost 127.0.0.1 minikube embed-certs-915633]
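
Note: the line above generates a server certificate whose SANs cover the VM IP, localhost, and the machine name. As a rough illustration, here is a minimal Go sketch of issuing such a certificate with only the standard library; the self-signing, subject, and one-year validity are assumptions for brevity (minikube actually signs with its own CA key, per the ca-key argument in the log line).

	// sancert.go - illustrative only: a self-signed server cert carrying the
	// SANs shown in the provision.go log line above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-915633"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour), // assumed validity window
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the san=[...] list in the log line above.
			DNSNames:    []string{"localhost", "minikube", "embed-certs-915633"},
			IPAddresses: []net.IP{net.ParseIP("192.168.50.218"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
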
	I0229 02:31:35.778616  369508 provision.go:172] copyRemoteCerts
	I0229 02:31:35.778679  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:35.778706  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.782134  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782605  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.782640  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782833  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.783103  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.783305  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.783522  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:35.870506  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:35.904595  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:31:35.936515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:35.966505  369508 provision.go:86] duration metric: configureAuth took 444.939951ms
	I0229 02:31:35.966539  369508 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:35.966725  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:31:35.966831  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.969731  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970133  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.970176  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970402  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.970623  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970788  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970968  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.971139  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.971382  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.971401  369508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:36.262676  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:36.262719  369508 machine.go:91] provisioned docker machine in 1.00682197s
	I0229 02:31:36.262731  369508 start.go:300] post-start starting for "embed-certs-915633" (driver="kvm2")
	I0229 02:31:36.262743  369508 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:36.262765  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.263140  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:36.263179  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.265718  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266095  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.266126  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.266486  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.266658  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.266795  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.359474  369508 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:36.365071  369508 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:36.365110  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:36.365202  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:36.365279  369508 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:36.365395  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:36.376823  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:36.406525  369508 start.go:303] post-start completed in 143.75518ms
	I0229 02:31:36.406588  369508 fix.go:56] fixHost completed within 20.310442727s
	I0229 02:31:36.406619  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.409415  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.409840  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.409875  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.410009  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.410214  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410412  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410567  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.410715  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:36.410936  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:36.410950  369508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:36.520508  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173896.494400897
	
	I0229 02:31:36.520543  369508 fix.go:206] guest clock: 1709173896.494400897
	I0229 02:31:36.520555  369508 fix.go:219] Guest: 2024-02-29 02:31:36.494400897 +0000 UTC Remote: 2024-02-29 02:31:36.406594326 +0000 UTC m=+361.755087901 (delta=87.806571ms)
	I0229 02:31:36.520584  369508 fix.go:190] guest clock delta is within tolerance: 87.806571ms
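
Note: the guest-clock check above runs "date +%s.%N" on the VM and compares the result with the host clock. A small Go sketch of that delta computation (the 2s tolerance is an assumption; the sample timestamp is copied from the log):

	// clockdelta.go - compare a `date +%s.%N` reading against local time.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		guestOut := "1709173896.494400897" // sample reading from the log above
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v\n", delta)
		fmt.Println("within assumed 2s tolerance:", delta < 2*time.Second)
	}
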
	I0229 02:31:36.520597  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 20.424490067s
	I0229 02:31:36.520629  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.520949  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:36.523819  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524146  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.524185  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.524912  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525109  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525206  369508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:36.525251  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.525332  369508 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:36.525360  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.528265  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528470  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528614  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528641  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528826  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528829  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.528855  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.529047  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.529135  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529253  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529321  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529414  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529478  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.529556  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.611757  369508 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:36.638875  369508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:36.786219  369508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:36.798964  369508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:36.799056  369508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:36.817942  369508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:36.817975  369508 start.go:475] detecting cgroup driver to use...
	I0229 02:31:36.818086  369508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:36.837019  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:36.855078  369508 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:36.855159  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:36.873444  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:36.891708  369508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:37.031928  369508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:37.212859  369508 docker.go:233] disabling docker service ...
	I0229 02:31:37.212960  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:37.235232  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:37.253901  369508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:37.401366  369508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:37.530791  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:37.547864  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:37.570344  369508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:31:37.570416  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.582275  369508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:37.582345  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.593628  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.605168  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
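
Note: the sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup beneath it. A minimal Go sketch of the same rewrites, applied to sample config text rather than the real /etc/crio/crio.conf.d/02-crio.conf (the sample content is an assumption):

	// criocfg.go - apply the same key rewrites as the sed commands above,
	// against in-memory sample text instead of the real drop-in file.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "[crio.image]\n" +
			"pause_image = \"registry.k8s.io/pause:3.6\"\n" +
			"[crio.runtime]\n" +
			"cgroup_manager = \"systemd\"\n" +
			"conmon_cgroup = \"system.slice\"\n"
		// Pin the pause image, as in the first sed.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// Drop any existing conmon_cgroup line, as in the delete sed.
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
			ReplaceAllString(conf, "")
		// Rewrite cgroup_manager and append conmon_cgroup right after it.
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}
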
	I0229 02:31:37.616567  369508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:37.628153  369508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:37.638579  369508 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:37.638640  369508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:37.652738  369508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:31:37.664118  369508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:37.785330  369508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:37.933006  369508 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:37.933095  369508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:37.938625  369508 start.go:543] Will wait 60s for crictl version
	I0229 02:31:37.938702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:31:37.943285  369508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:37.984992  369508 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:37.985105  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.018467  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.051472  369508 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:31:34.991345  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.991987  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:38.052850  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:38.055688  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.055970  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:38.056006  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.056253  369508 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:38.060925  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
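
Note: the bash one-liner above makes the /etc/hosts pin idempotent: strip any existing entry for the name, then append a fresh one. A Go sketch of the same idea, which prints the rewritten file instead of copying it back since writing /etc/hosts needs root:

	// hostsentry.go - idempotently pin "192.168.50.1<TAB>host.minikube.internal"
	// in hosts-file content, mirroring the grep -v / echo one-liner above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func pin(hosts, ip, name string) string {
		var out []string
		for _, l := range strings.Split(hosts, "\n") {
			if !strings.HasSuffix(l, "\t"+name) { // drop any stale entry
				out = append(out, l)
			}
		}
		return strings.Join(out, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(pin(string(data), "192.168.50.1", "host.minikube.internal"))
	}
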
	I0229 02:31:38.076126  369508 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:31:38.076197  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:38.116261  369508 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:31:38.116372  369508 ssh_runner.go:195] Run: which lz4
	I0229 02:31:38.121080  369508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:38.125711  369508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:38.125755  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:31:37.012008  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:39.018348  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.790885  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.042778  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.130251  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.215289  370051 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:37.215384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:37.715589  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.715938  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.215781  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.716505  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.216238  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.716182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.992988  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.491712  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.492458  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:40.139859  369508 crio.go:444] Took 2.018817 seconds to copy over tarball
	I0229 02:31:40.139953  369508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:31:43.071745  369508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.931752333s)
	I0229 02:31:43.071797  369508 crio.go:451] Took 2.931905 seconds to extract the tarball
	I0229 02:31:43.071809  369508 ssh_runner.go:146] rm: /preloaded.tar.lz4
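
Note: the preload tarball is unpacked with tar's -I lz4 filter while preserving xattrs so file capabilities survive extraction. A sketch that shells out with the exact flags from the log (it assumes lz4 and passwordless sudo are available, as on the buildroot guest):

	// extract.go - run the same preload extraction as the log line above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "extract failed:", err)
			os.Exit(1)
		}
	}
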
	I0229 02:31:43.118127  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:43.171147  369508 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:31:43.171176  369508 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:31:43.171262  369508 ssh_runner.go:195] Run: crio config
	I0229 02:31:43.232177  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:31:43.232203  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:43.232229  369508 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:43.232247  369508 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-915633 NodeName:embed-certs-915633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:31:43.232419  369508 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-915633"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:31:43.232519  369508 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-915633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
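
Note: the rendered config dumped above is a single multi-document YAML that feeds four components at once (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch that splits such a file on document separators and reports each document's kind, using only the standard library (the embedded sample is a trimmed copy of the config above):

	// splitdocs.go - split a multi-document kubeadm config and list each kind.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		config := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n" +
			"---\n" +
			"apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n" +
			"---\n" +
			"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n" +
			"---\n" +
			"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
		for i, doc := range strings.Split(config, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}
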
	I0229 02:31:43.232596  369508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:31:43.244392  369508 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:43.244467  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:43.256293  369508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0229 02:31:43.275397  369508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:43.295494  369508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0229 02:31:43.316812  369508 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:43.321496  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:43.335055  369508 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633 for IP: 192.168.50.218
	I0229 02:31:43.335092  369508 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:43.335270  369508 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:43.335316  369508 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:43.335388  369508 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/client.key
	I0229 02:31:43.335442  369508 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key.cc0da009
	I0229 02:31:43.335475  369508 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key
	I0229 02:31:43.335584  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:43.335610  369508 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:43.335619  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:43.335642  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:43.335673  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:43.335710  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:43.335779  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:43.336455  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:43.364985  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:43.394189  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:43.424515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:43.456589  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:43.486396  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:43.516931  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:43.546421  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:43.578923  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:43.608333  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:43.637196  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:43.667522  369508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:43.688266  369508 ssh_runner.go:195] Run: openssl version
	I0229 02:31:43.695616  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:43.709892  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715346  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715426  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.722688  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:43.735866  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:43.749967  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757599  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757671  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.765157  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:43.779671  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:43.792900  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798505  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798576  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.805192  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:31:43.818233  369508 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:43.823681  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:43.831016  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:43.837899  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:43.844802  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:43.851881  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:43.858689  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
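
Note: each "openssl x509 ... -checkend 86400" run above asks whether a certificate will still be valid 24 hours from now. An equivalent check in Go with crypto/x509 (the path is one of the files from the log):

	// checkend.go - Go analogue of `openssl x509 -checkend 86400`: exit
	// non-zero if the certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM data found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400s")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another day")
	}
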
	I0229 02:31:43.865749  369508 kubeadm.go:404] StartCluster: {Name:embed-certs-915633 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:43.865852  369508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:43.865925  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:43.906012  369508 cri.go:89] found id: ""
	I0229 02:31:43.906116  369508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:43.918241  369508 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:43.918265  369508 kubeadm.go:636] restartCluster start
	I0229 02:31:43.918349  369508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:43.930524  369508 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:43.931550  369508 kubeconfig.go:92] found "embed-certs-915633" server: "https://192.168.50.218:8443"
	I0229 02:31:43.933612  369508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:43.944469  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:43.944519  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:43.958194  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:44.444746  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:44.444840  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:44.458567  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:41.510364  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.511424  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.216236  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:41.716082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.215537  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.715524  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.215873  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.715634  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.216464  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.715519  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.216430  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.716196  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.990995  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.489390  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:44.944934  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.003707  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.018797  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.445348  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.445435  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.460199  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.944750  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.944879  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.959309  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.445218  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.445313  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.459195  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.945456  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.945538  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.959212  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.444711  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.444819  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.459189  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.944651  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.944726  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.958733  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.445008  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.445100  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.460126  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.944649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.944731  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.959993  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:49.444545  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.444628  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.458889  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.011657  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.508465  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:46.215715  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:46.715657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.216495  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.715491  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.215459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.715556  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.215675  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.716046  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.215993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.715594  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.489578  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.990638  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:49.945108  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.945265  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.960625  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.444843  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.444923  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.459329  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.944871  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.944963  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.959583  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.444601  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.444704  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.462037  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.944573  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.944658  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.958538  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.445111  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.445269  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.462902  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.945088  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.945182  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.960241  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.444649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.444738  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.458642  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.945214  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.945291  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.960552  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.960588  369508 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:53.960600  369508 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:53.960615  369508 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:53.960671  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:54.005230  369508 cri.go:89] found id: ""
	I0229 02:31:54.005321  369508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:54.027544  369508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:54.040517  369508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:54.040577  369508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051200  369508 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051223  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:54.168817  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:50.509119  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.509526  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:54.511540  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:51.215927  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:51.715888  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.215659  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.715769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.216175  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.715755  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.216468  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.216280  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.715924  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.992721  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:57.490570  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:55.091652  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.346578  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.443373  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
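Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the regenerated kubeadm.yaml. A sketch of that sequence with the command strings copied from the log lines above; error handling is simplified and this is not minikube's actual code:

    // Replay the kubeadm init phases shown above, one at a time,
    // against /var/tmp/minikube/kubeadm.yaml.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
                return
            }
        }
    }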
	I0229 02:31:55.542444  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:55.542562  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.042870  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.542972  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.571776  369508 api_server.go:72] duration metric: took 1.029332492s to wait for apiserver process to appear ...
	I0229 02:31:56.571808  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:56.571831  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:56.572606  369508 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0229 02:31:57.072145  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.557011  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.557048  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.557066  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.609944  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.610010  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.610028  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.669911  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:59.669955  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:57.010655  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:59.510097  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:00.071971  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.084661  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.084690  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:00.572262  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.577772  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.577807  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:01.072371  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:01.077306  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:32:01.084492  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:32:01.084531  369508 api_server.go:131] duration metric: took 4.512702749s to wait for apiserver health ...
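The 4.5s health wait above walks through the usual startup progression: connection refused while the apiserver binds, 403 while the unauthenticated probe is still denied as "system:anonymous", 500 while individual poststarthooks are still completing (the "[-] ... failed: reason withheld" lines; details are withheld from unauthenticated callers), and finally 200 "ok". A sketch of such a polling loop, assuming only the endpoint from the log; the timeout values are illustrative, not minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The probe runs before the host trusts the apiserver's
            // serving cert, so verification is skipped here.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for start := time.Now(); time.Since(start) < deadline; time.Sleep(500 * time.Millisecond) {
            resp, err := client.Get(url)
            if err != nil {
                continue // connection refused: apiserver not listening yet
            }
            code := resp.StatusCode
            resp.Body.Close()
            if code == http.StatusOK {
                return nil // healthz returned 200 "ok"
            }
            // 403 (anonymous request denied) and 500 (poststarthooks
            // still failing) both just mean "keep waiting".
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.218:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }

Treating 403 and 500 identically is deliberate: neither distinguishes a broken server from one still bootstrapping, so only a 200 ends the wait.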
	I0229 02:32:01.084544  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:32:01.084554  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:32:01.086337  369508 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:56.215653  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.715898  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.215954  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.216366  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.716093  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.215944  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.715553  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.216341  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.715677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.087584  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:32:01.099724  369508 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
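The 457-byte 1-k8s.conflist written above is a bridge CNI configuration. An illustrative conflist of that general shape; the exact fields and the pod subnet here are assumptions, not read from the log:

    // Emit a minimal bridge CNI conflist of the kind written to
    // /etc/cni/net.d/1-k8s.conflist (field values illustrative).
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conf := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]any{{
                "type":             "bridge",
                "bridge":           "bridge",
                "isDefaultGateway": true,
                "ipMasq":           true,
                "ipam": map[string]any{
                    "type":   "host-local",
                    "subnet": "10.244.0.0/16", // assumed pod CIDR
                },
            }},
        }
        out, _ := json.MarshalIndent(conf, "", "  ")
        fmt.Println(string(out))
    }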
	I0229 02:32:01.122381  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:32:01.133632  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:32:01.133674  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:32:01.133684  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:32:01.133697  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:32:01.133710  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:32:01.133720  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:32:01.133728  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:32:01.133738  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:32:01.133746  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:32:01.133755  369508 system_pods.go:74] duration metric: took 11.346225ms to wait for pod list to return data ...
	I0229 02:32:01.133767  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:32:01.138716  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:32:01.138746  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:32:01.138760  369508 node_conditions.go:105] duration metric: took 4.985648ms to run NodePressure ...
	I0229 02:32:01.138783  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:01.368503  369508 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373648  369508 kubeadm.go:787] kubelet initialised
	I0229 02:32:01.373669  369508 kubeadm.go:788] duration metric: took 5.137378ms waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373677  369508 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:01.379649  369508 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.384724  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384750  369508 pod_ready.go:81] duration metric: took 5.071017ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.384758  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384765  369508 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.390019  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390048  369508 pod_ready.go:81] duration metric: took 5.27491ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.390059  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390067  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.396275  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396294  369508 pod_ready.go:81] duration metric: took 6.218856ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.396302  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396307  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.525881  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525914  369508 pod_ready.go:81] duration metric: took 129.596783ms waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.525927  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525935  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.926806  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926843  369508 pod_ready.go:81] duration metric: took 400.889304ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.926856  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926864  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.326588  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326621  369508 pod_ready.go:81] duration metric: took 399.74816ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.326633  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326639  369508 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.727730  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727759  369508 pod_ready.go:81] duration metric: took 401.108694ms waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.727769  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727776  369508 pod_ready.go:38] duration metric: took 1.354090438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:02.727795  369508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:32:02.742069  369508 ops.go:34] apiserver oom_adj: -16
	I0229 02:32:02.742097  369508 kubeadm.go:640] restartCluster took 18.823823408s
	I0229 02:32:02.742107  369508 kubeadm.go:406] StartCluster complete in 18.876382148s
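The oom_adj check just above reads /proc/<pid>/oom_adj for the newest kube-apiserver process; the -16 means the kernel's OOM killer will strongly avoid it. A small sketch of the same probe, reusing the pgrep pattern from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-x", "-n", "-f", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver not running:", err)
            return
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // -16 here: shielded from the OOM killer
    }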
	I0229 02:32:02.742127  369508 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.742271  369508 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:32:02.744032  369508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.744292  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:32:02.744429  369508 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
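The toEnable map above lists every known addon with a boolean, and only the three true entries (default-storageclass, metrics-server, storage-provisioner) are acted on in the lines that follow. A trivial filter showing the shape of that decision, with names taken from the log and the false entries elided:

    package main

    import "fmt"

    func main() {
        toEnable := map[string]bool{
            "default-storageclass": true,
            "metrics-server":       true,
            "storage-provisioner":  true,
            "ingress":              false, // ...the remaining addons are false in this run
        }
        for name, on := range toEnable {
            if on {
                fmt.Println("enabling addon:", name)
            }
        }
    }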
	I0229 02:32:02.744507  369508 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-915633"
	I0229 02:32:02.744526  369508 addons.go:69] Setting default-storageclass=true in profile "embed-certs-915633"
	I0229 02:32:02.744540  369508 addons.go:69] Setting metrics-server=true in profile "embed-certs-915633"
	I0229 02:32:02.744550  369508 addons.go:234] Setting addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:02.744555  369508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-915633"
	W0229 02:32:02.744558  369508 addons.go:243] addon metrics-server should already be in state true
	I0229 02:32:02.744619  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744532  369508 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-915633"
	W0229 02:32:02.744735  369508 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:32:02.744853  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744682  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:32:02.745085  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745113  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745121  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745175  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745339  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745416  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.749865  369508 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-915633" context rescaled to 1 replicas
	I0229 02:32:02.749907  369508 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:32:02.751823  369508 out.go:177] * Verifying Kubernetes components...
	I0229 02:32:02.753296  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:32:02.762688  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0229 02:32:02.763050  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0229 02:32:02.763274  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763693  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763872  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.763895  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.763963  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0229 02:32:02.764307  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.764337  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.764554  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.764592  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.764665  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.765103  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765135  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765144  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765170  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765481  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.765495  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.765863  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.766129  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.769253  369508 addons.go:234] Setting addon default-storageclass=true in "embed-certs-915633"
	W0229 02:32:02.769274  369508 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:32:02.769295  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.769578  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.769607  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.787345  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35577
	I0229 02:32:02.787806  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.788243  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.788266  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.789755  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0229 02:32:02.790272  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.790361  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0229 02:32:02.790634  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.790727  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.791027  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.791192  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791206  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791367  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791402  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791705  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.791924  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.792315  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.792987  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.793026  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.793278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.795128  369508 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:32:02.794105  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.796451  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:32:02.796472  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:32:02.796496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.797812  369508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:59.493919  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.989683  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:02.799249  369508 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.799270  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:32:02.799289  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.800109  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.800960  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.801015  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.801300  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.801496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.801635  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.801763  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.802278  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802617  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.802645  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802836  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.803026  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.803174  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.803390  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.818656  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0229 02:32:02.819105  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.819606  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.819625  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.820022  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.820366  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.822054  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.822412  369508 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:02.822432  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:32:02.822451  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.825579  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826260  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.826293  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826463  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.826614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.826761  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.826954  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
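Each "new ssh client" line above carries the IP, port, key path, and user used for the "scp memory --> file" copies that follow. A hedged sketch of that pattern using golang.org/x/crypto/ssh (an assumed library choice; minikube's sshutil is its own code), streaming an in-memory manifest into a root-owned remote path:

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, host key not pinned
        }
        client, err := ssh.Dial("tcp", "192.168.50.218:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        manifest := []byte("# addon yaml would go here\n")
        sess.Stdin = bytes.NewReader(manifest)
        // Write stdin to the target path as a privileged copy.
        if err := sess.Run("sudo tee /etc/kubernetes/addons/storageclass.yaml >/dev/null"); err != nil {
            panic(err)
        }
        fmt.Println("copied", len(manifest), "bytes")
    }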
	I0229 02:32:02.911316  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.945655  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:32:02.945683  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:32:02.981318  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:32:02.981352  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:32:02.983632  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:03.009561  369508 node_ready.go:35] waiting up to 6m0s for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:03.009586  369508 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:32:03.044265  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:03.044293  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:32:03.094073  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
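The metrics-server manifests are applied in a single kubectl invocation with several -f flags, using the in-VM binary and kubeconfig. The same call, sketched as it would run on the VM, with paths copied from the log line above:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl",
            "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
        )
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }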
	I0229 02:32:04.287008  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.3033415s)
	I0229 02:32:04.287081  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287094  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287375  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37602435s)
	I0229 02:32:04.287416  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287428  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287440  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287463  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287478  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287487  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287750  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287800  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287828  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287861  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287805  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287914  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287834  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.287774  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289370  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289377  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.289397  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.293892  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.293919  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.294180  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.294198  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.294212  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.376595  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.28244915s)
	I0229 02:32:04.376679  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.376710  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377004  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377022  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377031  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.377039  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377037  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377275  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377319  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377331  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377348  369508 addons.go:470] Verifying addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:04.380194  369508 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:32:04.381510  369508 addons.go:505] enable addons completed in 1.637082823s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 02:32:02.010578  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:04.509975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.216197  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.716302  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.216170  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.715615  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.216580  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.716088  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.215743  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.716142  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.216543  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.715853  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.991440  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.992389  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:08.491225  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.014879  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.518854  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.009085  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:09.009296  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:06.216206  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:06.715748  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.215964  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.716419  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.216034  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.715611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.216207  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.716408  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.216144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.716454  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.491751  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:12.991326  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:10.013574  369508 node_ready.go:49] node "embed-certs-915633" has status "Ready":"True"
	I0229 02:32:10.013605  369508 node_ready.go:38] duration metric: took 7.004009102s waiting for node "embed-certs-915633" to be "Ready" ...
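"Ready":"True" here means the Node's NodeReady condition finally flipped once the kubelet re-registered and the bridge CNI came up; every pod_ready wait skipped earlier was gated on this condition. A sketch of the equivalent check using client-go (an assumption; minikube's node_ready wait is its own polling loop):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-915633", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node Ready: %s\n", c.Status) // "True" once kubelet + CNI are up
            }
        }
    }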
	I0229 02:32:10.013617  369508 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:10.020332  369508 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025740  369508 pod_ready.go:92] pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.025766  369508 pod_ready.go:81] duration metric: took 5.403764ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025778  369508 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534182  369508 pod_ready.go:92] pod "etcd-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.534212  369508 pod_ready.go:81] duration metric: took 508.426559ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534238  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.048997  369508 pod_ready.go:92] pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:11.049027  369508 pod_ready.go:81] duration metric: took 514.780048ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.049040  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:13.056477  369508 pod_ready.go:102] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.010305  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:13.011477  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.215611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:11.716198  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.216332  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.716413  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.216407  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.716466  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.216182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.716285  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.215995  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.715613  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.491511  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:17.494485  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:15.056064  369508 pod_ready.go:92] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.056093  369508 pod_ready.go:81] duration metric: took 4.007044542s waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.056104  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061418  369508 pod_ready.go:92] pod "kube-proxy-6tt7l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.061440  369508 pod_ready.go:81] duration metric: took 5.329971ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061451  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578305  369508 pod_ready.go:92] pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.578332  369508 pod_ready.go:81] duration metric: took 516.873281ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578341  369508 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:17.585624  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:19.586470  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:15.510630  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:18.010381  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:16.215530  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:16.716420  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.216031  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.716303  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.216082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.715523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.216166  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.716503  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.215680  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.715770  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.989766  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.989821  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.586820  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:20.509895  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.010371  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.215523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:21.715617  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.216133  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.716029  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.216141  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.715578  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.215640  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.715601  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.215959  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.716394  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.990493  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.990911  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.489681  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.085933  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.086754  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.508765  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:27.508956  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:29.512409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.215946  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:26.715834  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.216243  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.715581  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.215521  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.715849  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.716497  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.215657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.715492  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.490400  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:32.990250  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:30.586107  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:33.086852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.518170  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:34.009514  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.216322  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:31.716160  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.215557  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.715618  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.215761  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.716216  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.216460  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.716244  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.215551  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.715633  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.990305  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.990956  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:35.585472  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:37.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.509112  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:38.509634  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.215910  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:36.716307  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:37.216308  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:37.216404  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:37.262324  370051 cri.go:89] found id: ""
	I0229 02:32:37.262358  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.262370  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:37.262378  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:37.262442  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:37.303758  370051 cri.go:89] found id: ""
	I0229 02:32:37.303790  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.303802  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:37.303809  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:37.303880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:37.349512  370051 cri.go:89] found id: ""
	I0229 02:32:37.349538  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.349546  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:37.349553  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:37.349607  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:37.389630  370051 cri.go:89] found id: ""
	I0229 02:32:37.389657  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.389668  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:37.389676  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:37.389752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:37.435918  370051 cri.go:89] found id: ""
	I0229 02:32:37.435954  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.435967  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:37.435976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:37.436044  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:37.479336  370051 cri.go:89] found id: ""
	I0229 02:32:37.479369  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.479377  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:37.479384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:37.479460  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:37.519944  370051 cri.go:89] found id: ""
	I0229 02:32:37.519979  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.519991  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:37.519999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:37.520071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:37.563848  370051 cri.go:89] found id: ""
	I0229 02:32:37.563875  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.563884  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:37.563895  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:37.563915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:37.607989  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:37.608025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:37.660272  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:37.660324  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:37.676878  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:37.676909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:37.805099  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:37.805132  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:37.805159  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
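Once the pgrep wait gives up, the run above switches to diagnostics: cri.go enumerates containers per control-plane component via `crictl ps -a --quiet --name=<component>`, and, finding none, logs.go gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A rough reconstruction of the enumeration step (assumed, not minikube's code) — `--quiet` prints one container ID per line, so an empty output means no matching container exists in any state:

```go
// Sketch of the CRI container enumeration seen in the cycle above.
// Assumed reconstruction; component names are copied from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (any state) whose
// name matches the given component, via crictl's --quiet output.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```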
	I0229 02:32:40.378467  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:40.393066  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:40.393221  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:40.432592  370051 cri.go:89] found id: ""
	I0229 02:32:40.432619  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.432628  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:40.432634  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:40.432693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:40.473651  370051 cri.go:89] found id: ""
	I0229 02:32:40.473706  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.473716  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:40.473722  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:40.473781  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:40.520262  370051 cri.go:89] found id: ""
	I0229 02:32:40.520292  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.520303  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:40.520312  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:40.520374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:40.560359  370051 cri.go:89] found id: ""
	I0229 02:32:40.560393  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.560402  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:40.560408  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:40.560474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:40.602145  370051 cri.go:89] found id: ""
	I0229 02:32:40.602173  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.602181  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:40.602187  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:40.602266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:40.640744  370051 cri.go:89] found id: ""
	I0229 02:32:40.640778  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.640791  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:40.640799  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:40.640869  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:40.681863  370051 cri.go:89] found id: ""
	I0229 02:32:40.681895  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.681908  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:40.681916  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:40.681985  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:40.725859  370051 cri.go:89] found id: ""
	I0229 02:32:40.725890  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.725899  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:40.725910  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:40.725924  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:40.794666  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:40.794705  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:40.854173  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:40.854215  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:40.901744  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:40.901786  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:40.925331  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:40.925371  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:41.005785  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
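The describe-nodes failure is the telling symptom here: kubectl cannot reach the apiserver on localhost:8443, which is consistent with crictl finding no kube-apiserver container at all. An illustrative direct probe of the same endpoint — the port is taken from the log; TLS verification is skipped only because this sketch carries no cluster CA, and it is not part of the test suite:

```go
// Illustrative probe of the apiserver /healthz endpoint that would fail
// with "connection refused" in the state captured above. Assumed helper,
// not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // matches the refusal in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}
```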
	I0229 02:32:39.491292  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.494077  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:40.086540  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:42.584644  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:44.587012  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.010764  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.510128  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.506756  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:43.522038  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:43.522135  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:43.559609  370051 cri.go:89] found id: ""
	I0229 02:32:43.559635  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.559642  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:43.559649  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:43.559707  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:43.609059  370051 cri.go:89] found id: ""
	I0229 02:32:43.609087  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.609096  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:43.609102  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:43.609159  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:43.648988  370051 cri.go:89] found id: ""
	I0229 02:32:43.649018  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.649029  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:43.649037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:43.649104  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:43.690995  370051 cri.go:89] found id: ""
	I0229 02:32:43.691028  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.691042  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:43.691054  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:43.691120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:43.729221  370051 cri.go:89] found id: ""
	I0229 02:32:43.729249  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.729257  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:43.729263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:43.729334  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:43.767141  370051 cri.go:89] found id: ""
	I0229 02:32:43.767174  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.767186  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:43.767194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:43.767266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:43.807926  370051 cri.go:89] found id: ""
	I0229 02:32:43.807962  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.807970  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:43.807976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:43.808029  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:43.857945  370051 cri.go:89] found id: ""
	I0229 02:32:43.857973  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.857981  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:43.857991  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:43.858005  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:43.941290  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:43.941338  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:43.986788  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:43.986823  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:44.037384  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:44.037421  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:44.052668  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:44.052696  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:44.127124  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
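For reference, the four log sources gathered on each diagnostic pass map to the shell pipelines visible above; the container-status step falls back from crictl to docker so it works on either runtime. A sketch that replays them in order (assumed wrapper; the commands themselves are copied verbatim from the log):

```go
// Assumed reconstruction of the log-gathering pass -- each source is a
// shell pipeline run through bash -c, as in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		// Falls back to docker if crictl is missing or fails.
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", s.name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", s.name, out)
	}
}
```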
	I0229 02:32:43.990179  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.990921  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.991525  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.086821  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:49.585987  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.510273  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:48.009067  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:50.011776  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:46.627409  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:46.642306  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:46.642397  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:46.685358  370051 cri.go:89] found id: ""
	I0229 02:32:46.685389  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.685400  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:46.685431  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:46.685493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:46.724996  370051 cri.go:89] found id: ""
	I0229 02:32:46.725026  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.725035  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:46.725041  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:46.725113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:46.765815  370051 cri.go:89] found id: ""
	I0229 02:32:46.765849  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.765857  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:46.765863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:46.765924  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:46.808946  370051 cri.go:89] found id: ""
	I0229 02:32:46.808980  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.808991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:46.809000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:46.809068  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:46.865068  370051 cri.go:89] found id: ""
	I0229 02:32:46.865106  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.865119  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:46.865127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:46.865200  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:46.932233  370051 cri.go:89] found id: ""
	I0229 02:32:46.932260  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.932268  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:46.932275  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:46.932331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:46.985701  370051 cri.go:89] found id: ""
	I0229 02:32:46.985732  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.985744  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:46.985752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:46.985819  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:47.027497  370051 cri.go:89] found id: ""
	I0229 02:32:47.027524  370051 logs.go:276] 0 containers: []
	W0229 02:32:47.027536  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:47.027548  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:47.027565  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:47.075955  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:47.075990  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:47.093922  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:47.093949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:47.165000  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:47.165029  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:47.165046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:47.250161  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:47.250201  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:49.794654  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:49.809706  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:49.809787  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:49.868163  370051 cri.go:89] found id: ""
	I0229 02:32:49.868197  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.868217  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:49.868223  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:49.868277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:49.928462  370051 cri.go:89] found id: ""
	I0229 02:32:49.928495  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.928508  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:49.928516  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:49.928580  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:49.975725  370051 cri.go:89] found id: ""
	I0229 02:32:49.975755  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.975765  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:49.975774  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:49.975849  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:50.017007  370051 cri.go:89] found id: ""
	I0229 02:32:50.017036  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.017046  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:50.017051  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:50.017118  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:50.054522  370051 cri.go:89] found id: ""
	I0229 02:32:50.054551  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.054560  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:50.054566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:50.054620  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:50.096274  370051 cri.go:89] found id: ""
	I0229 02:32:50.096300  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.096308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:50.096319  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:50.096382  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:50.142543  370051 cri.go:89] found id: ""
	I0229 02:32:50.142581  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.142590  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:50.142597  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:50.142667  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:50.182452  370051 cri.go:89] found id: ""
	I0229 02:32:50.182482  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.182492  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:50.182505  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:50.182522  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:50.266311  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:50.266355  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:50.309277  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:50.309322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:50.360492  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:50.360536  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:50.376711  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:50.376744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:50.447128  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:49.992032  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.490801  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:51.586053  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:53.586268  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.510054  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:54.510975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.947926  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:52.970209  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:52.970317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:53.010840  370051 cri.go:89] found id: ""
	I0229 02:32:53.010868  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.010878  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:53.010886  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:53.010983  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:53.049458  370051 cri.go:89] found id: ""
	I0229 02:32:53.049490  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.049503  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:53.049511  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:53.049578  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:53.088615  370051 cri.go:89] found id: ""
	I0229 02:32:53.088646  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.088656  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:53.088671  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:53.088738  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:53.130176  370051 cri.go:89] found id: ""
	I0229 02:32:53.130210  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.130237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:53.130247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:53.130317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:53.177876  370051 cri.go:89] found id: ""
	I0229 02:32:53.177908  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.177920  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:53.177928  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:53.177991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:53.216036  370051 cri.go:89] found id: ""
	I0229 02:32:53.216065  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.216074  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:53.216080  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:53.216143  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:53.254673  370051 cri.go:89] found id: ""
	I0229 02:32:53.254705  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.254716  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:53.254724  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:53.254785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:53.291508  370051 cri.go:89] found id: ""
	I0229 02:32:53.291539  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.291551  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:53.291564  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:53.291581  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:53.343312  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:53.343354  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:53.359264  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:53.359294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:53.431396  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:53.431428  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:53.431445  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:53.512494  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:53.512529  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:56.057340  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:56.073074  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:56.073158  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:56.111650  370051 cri.go:89] found id: ""
	I0229 02:32:56.111684  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.111704  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:56.111713  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:56.111785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:54.990490  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.991005  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:55.587290  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:58.086312  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:57.008288  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:59.011396  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.150147  370051 cri.go:89] found id: ""
	I0229 02:32:56.150178  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.150191  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:56.150200  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:56.150280  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:56.192842  370051 cri.go:89] found id: ""
	I0229 02:32:56.192878  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.192890  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:56.192898  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:56.192969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:56.232013  370051 cri.go:89] found id: ""
	I0229 02:32:56.232051  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.232062  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:56.232079  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:56.232151  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:56.273824  370051 cri.go:89] found id: ""
	I0229 02:32:56.273858  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.273871  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:56.273882  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:56.273949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:56.312112  370051 cri.go:89] found id: ""
	I0229 02:32:56.312139  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.312147  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:56.312153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:56.312203  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:56.352558  370051 cri.go:89] found id: ""
	I0229 02:32:56.352585  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.352593  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:56.352600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:56.352666  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:56.397719  370051 cri.go:89] found id: ""
	I0229 02:32:56.397762  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.397775  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:56.397790  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:56.397808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:56.447793  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:56.447831  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:56.463859  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:56.463894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:56.540306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:56.540333  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:56.540347  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:56.633201  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:56.633247  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.207459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:59.222165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:59.222271  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:59.261197  370051 cri.go:89] found id: ""
	I0229 02:32:59.261230  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.261242  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:59.261251  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:59.261338  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:59.300874  370051 cri.go:89] found id: ""
	I0229 02:32:59.300917  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.300940  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:59.300950  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:59.301025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:59.345399  370051 cri.go:89] found id: ""
	I0229 02:32:59.345435  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.345446  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:59.345455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:59.345525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:59.386068  370051 cri.go:89] found id: ""
	I0229 02:32:59.386102  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.386112  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:59.386132  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:59.386184  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:59.436597  370051 cri.go:89] found id: ""
	I0229 02:32:59.436629  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.436641  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:59.436649  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:59.436708  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:59.481417  370051 cri.go:89] found id: ""
	I0229 02:32:59.481446  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.481462  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:59.481469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:59.481535  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:59.527725  370051 cri.go:89] found id: ""
	I0229 02:32:59.527752  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.527763  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:59.527771  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:59.527845  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:59.574502  370051 cri.go:89] found id: ""
	I0229 02:32:59.574535  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.574547  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:59.574561  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:59.574579  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:59.669584  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:59.669630  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.730049  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:59.730096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:59.779562  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:59.779613  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:59.797016  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:59.797046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:59.876438  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:58.991584  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.489321  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:03.489615  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:00.585463  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:02.587986  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.588479  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.509980  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.009579  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:02.377144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:02.391585  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:02.391682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:02.432359  370051 cri.go:89] found id: ""
	I0229 02:33:02.432390  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.432399  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:02.432406  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:02.432462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:02.476733  370051 cri.go:89] found id: ""
	I0229 02:33:02.476768  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.476781  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:02.476790  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:02.476856  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:02.521414  370051 cri.go:89] found id: ""
	I0229 02:33:02.521440  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.521448  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:02.521454  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:02.521513  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:02.561663  370051 cri.go:89] found id: ""
	I0229 02:33:02.561690  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.561698  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:02.561704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:02.561755  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:02.611953  370051 cri.go:89] found id: ""
	I0229 02:33:02.611989  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.612002  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:02.612010  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:02.612079  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:02.663254  370051 cri.go:89] found id: ""
	I0229 02:33:02.663282  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.663290  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:02.663297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:02.663348  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:02.721449  370051 cri.go:89] found id: ""
	I0229 02:33:02.721484  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.721497  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:02.721506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:02.721579  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:02.761197  370051 cri.go:89] found id: ""
	I0229 02:33:02.761239  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.761251  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:02.761265  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:02.761282  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:02.810457  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:02.810498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:02.828906  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:02.828940  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:02.911895  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:02.911932  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:02.911945  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:02.995120  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:02.995152  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
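
	[Note: the cri.go lines above show one full enumeration pass: for each control-plane component, minikube runs "sudo crictl ps -a --quiet --name=<component>" and warns when no container matches. A minimal Go sketch of that pattern (not minikube's actual implementation; component names and the crictl invocation are copied from the log) is:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Same component list the log iterates over.
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	        }
	        for _, name := range components {
	            // Same invocation the log shows: sudo crictl ps -a --quiet --name=<component>
	            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            if err != nil {
	                fmt.Printf("crictl failed for %q: %v\n", name, err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            if len(ids) == 0 {
	                // Mirrors the W-level "No container was found matching ..." lines.
	                fmt.Printf("no container was found matching %q\n", name)
	                continue
	            }
	            fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
	        }
	    }

	An empty result for every component, as seen above, means the CRI runtime is up but no control-plane containers have been created yet.]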
	I0229 02:33:05.544629  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:05.559266  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:05.559342  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:05.609673  370051 cri.go:89] found id: ""
	I0229 02:33:05.609706  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.609718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:05.609727  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:05.609795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:05.665161  370051 cri.go:89] found id: ""
	I0229 02:33:05.665192  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.665203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:05.665211  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:05.665282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:05.719923  370051 cri.go:89] found id: ""
	I0229 02:33:05.719949  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.719957  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:05.719963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:05.720025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:05.765189  370051 cri.go:89] found id: ""
	I0229 02:33:05.765224  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.765237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:05.765245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:05.765357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:05.803788  370051 cri.go:89] found id: ""
	I0229 02:33:05.803820  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.803829  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:05.803836  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:05.803909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:05.842152  370051 cri.go:89] found id: ""
	I0229 02:33:05.842178  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.842188  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:05.842197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:05.842278  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:05.885042  370051 cri.go:89] found id: ""
	I0229 02:33:05.885071  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.885084  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:05.885092  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:05.885156  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:05.926032  370051 cri.go:89] found id: ""
	I0229 02:33:05.926069  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.926082  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:05.926096  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:05.926112  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:06.014702  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:06.014744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:06.063510  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:06.063550  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:06.114215  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:06.114272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:06.130132  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:06.130169  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:05.490726  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.491068  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.085225  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:09.087524  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:06.508469  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:08.509399  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:06.205692  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:08.706549  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:08.722548  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:08.722614  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:08.768518  370051 cri.go:89] found id: ""
	I0229 02:33:08.768553  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.768564  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:08.768572  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:08.768630  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:08.804600  370051 cri.go:89] found id: ""
	I0229 02:33:08.804630  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.804643  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:08.804651  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:08.804721  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:08.842466  370051 cri.go:89] found id: ""
	I0229 02:33:08.842497  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.842510  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:08.842518  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:08.842589  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:08.878384  370051 cri.go:89] found id: ""
	I0229 02:33:08.878412  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.878421  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:08.878427  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:08.878484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:08.924228  370051 cri.go:89] found id: ""
	I0229 02:33:08.924262  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.924275  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:08.924295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:08.924374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:08.966122  370051 cri.go:89] found id: ""
	I0229 02:33:08.966157  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.966168  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:08.966177  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:08.966254  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:09.011109  370051 cri.go:89] found id: ""
	I0229 02:33:09.011135  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.011144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:09.011152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:09.011217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:09.059716  370051 cri.go:89] found id: ""
	I0229 02:33:09.059749  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.059782  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:09.059795  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:09.059812  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:09.110564  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:09.110599  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:09.126037  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:09.126065  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:09.199827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:09.199858  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:09.199892  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:09.282624  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:09.282661  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:09.990502  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.991783  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.586475  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:13.586740  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:10.511051  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:12.512644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:15.009478  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
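
	[Note: the interleaved pod_ready.go:102 lines are separate test processes (369508, 369591, 369869) each polling a metrics-server pod's Ready condition roughly every two seconds. A minimal client-go sketch of that wait loop, under the assumption of a standard kubeconfig (this is an illustration, not minikube's code; the pod name is one of the concrete names from the log):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // podReady reports whether the pod's PodReady condition is True.
	    func podReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        const ns, name = "kube-system", "metrics-server-57f55c9bc5-86frx"
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        for {
	            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	            if err == nil && podReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            // Matches the repeated `has status "Ready":"False"` log lines.
	            fmt.Printf("pod %q not Ready yet (err=%v); retrying\n", name, err)
	            time.Sleep(2 * time.Second)
	        }
	    }

	The metrics-server pods here never reach Ready, which is why these lines repeat for the remainder of the section.]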
	I0229 02:33:11.829017  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:11.842826  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:11.842894  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:11.881652  370051 cri.go:89] found id: ""
	I0229 02:33:11.881689  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.881700  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:11.881709  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:11.881773  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:11.919252  370051 cri.go:89] found id: ""
	I0229 02:33:11.919291  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.919302  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:11.919309  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:11.919380  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:11.959145  370051 cri.go:89] found id: ""
	I0229 02:33:11.959175  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.959187  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:11.959196  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:11.959263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:12.002105  370051 cri.go:89] found id: ""
	I0229 02:33:12.002134  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.002145  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:12.002153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:12.002219  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:12.042157  370051 cri.go:89] found id: ""
	I0229 02:33:12.042188  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.042221  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:12.042249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:12.042326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:12.080121  370051 cri.go:89] found id: ""
	I0229 02:33:12.080150  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.080158  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:12.080165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:12.080231  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:12.119259  370051 cri.go:89] found id: ""
	I0229 02:33:12.119286  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.119294  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:12.119301  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:12.119357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:12.160136  370051 cri.go:89] found id: ""
	I0229 02:33:12.160171  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.160182  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:12.160195  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:12.160209  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:12.209770  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:12.209810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:12.226429  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:12.226460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:12.295933  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:12.295966  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:12.295978  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:12.380794  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:12.380843  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:14.971692  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:14.986085  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:14.986162  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:15.024756  370051 cri.go:89] found id: ""
	I0229 02:33:15.024788  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.024801  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:15.024809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:15.024868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:15.065131  370051 cri.go:89] found id: ""
	I0229 02:33:15.065159  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.065172  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:15.065180  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:15.065251  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:15.104744  370051 cri.go:89] found id: ""
	I0229 02:33:15.104775  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.104786  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:15.104794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:15.104858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:15.145710  370051 cri.go:89] found id: ""
	I0229 02:33:15.145737  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.145745  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:15.145752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:15.145803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:15.184908  370051 cri.go:89] found id: ""
	I0229 02:33:15.184933  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.184942  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:15.184951  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:15.185016  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:15.230195  370051 cri.go:89] found id: ""
	I0229 02:33:15.230220  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.230241  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:15.230249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:15.230326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:15.269750  370051 cri.go:89] found id: ""
	I0229 02:33:15.269774  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.269783  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:15.269789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:15.269852  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:15.312331  370051 cri.go:89] found id: ""
	I0229 02:33:15.312360  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.312373  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:15.312387  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:15.312402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:15.363032  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:15.363067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:15.422421  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:15.422463  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:15.445235  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:15.445272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:15.530010  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:15.530047  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:15.530066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:14.489188  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.991028  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.090733  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.587045  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:17.510766  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:20.009379  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.116265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:18.130375  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:18.130439  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:18.167740  370051 cri.go:89] found id: ""
	I0229 02:33:18.167767  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.167776  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:18.167782  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:18.167843  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:18.205621  370051 cri.go:89] found id: ""
	I0229 02:33:18.205653  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.205662  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:18.205670  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:18.205725  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:18.246917  370051 cri.go:89] found id: ""
	I0229 02:33:18.246954  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.246975  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:18.246983  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:18.247040  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:18.285087  370051 cri.go:89] found id: ""
	I0229 02:33:18.285114  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.285123  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:18.285130  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:18.285181  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:18.323989  370051 cri.go:89] found id: ""
	I0229 02:33:18.324018  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.324027  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:18.324033  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:18.324094  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:18.372741  370051 cri.go:89] found id: ""
	I0229 02:33:18.372769  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.372779  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:18.372785  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:18.372838  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:18.432846  370051 cri.go:89] found id: ""
	I0229 02:33:18.432888  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.432900  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:18.432908  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:18.432977  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:18.486357  370051 cri.go:89] found id: ""
	I0229 02:33:18.486387  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.486399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:18.486411  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:18.486431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:18.532363  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:18.532402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:18.582035  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:18.582076  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:18.599009  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:18.599050  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:18.673580  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:18.673609  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:18.673625  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:19.490704  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.990251  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.085541  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:23.086148  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:22.009826  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:24.509388  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.259614  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:21.274150  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:21.274250  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:21.311859  370051 cri.go:89] found id: ""
	I0229 02:33:21.311895  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.311908  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:21.311917  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:21.311984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:21.364260  370051 cri.go:89] found id: ""
	I0229 02:33:21.364296  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.364309  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:21.364317  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:21.364391  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:21.424181  370051 cri.go:89] found id: ""
	I0229 02:33:21.424217  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.424229  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:21.424237  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:21.424306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:21.482499  370051 cri.go:89] found id: ""
	I0229 02:33:21.482531  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.482543  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:21.482551  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:21.482621  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:21.523743  370051 cri.go:89] found id: ""
	I0229 02:33:21.523775  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.523785  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:21.523793  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:21.523868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:21.563759  370051 cri.go:89] found id: ""
	I0229 02:33:21.563789  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.563800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:21.563809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:21.563889  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:21.610162  370051 cri.go:89] found id: ""
	I0229 02:33:21.610265  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.610286  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:21.610295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:21.610378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:21.652001  370051 cri.go:89] found id: ""
	I0229 02:33:21.652028  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.652037  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:21.652047  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:21.652060  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:21.704028  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:21.704067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:21.720924  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:21.720956  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:21.798619  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:21.798645  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:21.798664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:21.888445  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:21.888506  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
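
	[Note: each "Gathering logs for ..." pass runs the same five shell commands shown in the log. A compact Go sketch of that collection step (an illustration only; every command string is taken verbatim from the log lines above):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // gather runs one log source through bash, as ssh_runner does remotely.
	    func gather(name, cmd string) {
	        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	        fmt.Printf("==> %s (err=%v, %d bytes)\n", name, err, len(out))
	    }

	    func main() {
	        gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	        gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	        gather("describe nodes", `sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
	        gather("CRI-O", `sudo journalctl -u crio -n 400`)
	        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	    }

	Only the "describe nodes" step fails in this run, because it is the only one that needs the apiserver.]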
	I0229 02:33:24.437647  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:24.459963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:24.460041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:24.503906  370051 cri.go:89] found id: ""
	I0229 02:33:24.503940  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.503950  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:24.503956  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:24.504031  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:24.541893  370051 cri.go:89] found id: ""
	I0229 02:33:24.541919  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.541929  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:24.541935  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:24.541991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:24.584717  370051 cri.go:89] found id: ""
	I0229 02:33:24.584748  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.584760  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:24.584769  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:24.584836  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:24.623334  370051 cri.go:89] found id: ""
	I0229 02:33:24.623362  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.623371  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:24.623378  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:24.623447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:24.665862  370051 cri.go:89] found id: ""
	I0229 02:33:24.665890  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.665902  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:24.665911  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:24.665984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:24.705509  370051 cri.go:89] found id: ""
	I0229 02:33:24.705540  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.705551  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:24.705560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:24.705634  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:24.745348  370051 cri.go:89] found id: ""
	I0229 02:33:24.745389  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.745399  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:24.745406  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:24.745462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:24.785490  370051 cri.go:89] found id: ""
	I0229 02:33:24.785520  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.785529  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:24.785539  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:24.785553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.829556  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:24.829589  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:24.877914  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:24.877949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:24.894590  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:24.894623  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:24.972948  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:24.972981  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:24.972997  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:23.990806  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.489823  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:25.586684  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.588321  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.509932  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:29.010692  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.555364  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:27.570747  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:27.570820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:27.609771  370051 cri.go:89] found id: ""
	I0229 02:33:27.609800  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.609807  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:27.609813  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:27.609863  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:27.654316  370051 cri.go:89] found id: ""
	I0229 02:33:27.654347  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.654360  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:27.654376  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:27.654453  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:27.695089  370051 cri.go:89] found id: ""
	I0229 02:33:27.695125  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.695137  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:27.695143  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:27.695199  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:27.733846  370051 cri.go:89] found id: ""
	I0229 02:33:27.733881  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.733893  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:27.733901  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:27.733972  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:27.772906  370051 cri.go:89] found id: ""
	I0229 02:33:27.772940  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.772953  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:27.772961  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:27.773039  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:27.812266  370051 cri.go:89] found id: ""
	I0229 02:33:27.812295  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.812308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:27.812316  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:27.812387  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:27.849272  370051 cri.go:89] found id: ""
	I0229 02:33:27.849305  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.849316  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:27.849324  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:27.849393  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:27.887495  370051 cri.go:89] found id: ""
	I0229 02:33:27.887528  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.887541  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:27.887554  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:27.887569  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:27.972220  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:27.972261  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:28.020757  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:28.020797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:28.070347  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:28.070381  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:28.089905  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:28.089947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:28.183306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
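
	[Note: the recurring "connection to the server localhost:8443 was refused" stderr means nothing is listening on the apiserver port at all, which is consistent with the empty crictl listings above. A small Go probe, sketched here as an assumption-laden illustration (the /healthz path is the standard apiserver health endpoint; TLS verification is skipped only because this is a reachability check), distinguishes "refused" from "up but unhealthy":

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 3 * time.Second,
	            // Reachability check only: skip verification of the cluster-local CA.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        resp, err := client.Get("https://localhost:8443/healthz")
	        if err != nil {
	            // e.g. "connect: connection refused", matching the log's stderr.
	            fmt.Println("apiserver not reachable:", err)
	            return
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Printf("healthz: %s (HTTP %d)\n", body, resp.StatusCode)
	    }

	A "connection refused" result here tells the retry loop to keep waiting for the apiserver container to start, exactly as the cycles above do.]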
	I0229 02:33:30.683857  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:30.701341  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:30.701443  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:30.741342  370051 cri.go:89] found id: ""
	I0229 02:33:30.741376  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.741387  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:30.741397  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:30.741475  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:30.785372  370051 cri.go:89] found id: ""
	I0229 02:33:30.785415  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.785427  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:30.785435  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:30.785506  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:30.828402  370051 cri.go:89] found id: ""
	I0229 02:33:30.828428  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.828436  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:30.828442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:30.828504  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:30.872656  370051 cri.go:89] found id: ""
	I0229 02:33:30.872684  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.872695  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:30.872704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:30.872770  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:30.918746  370051 cri.go:89] found id: ""
	I0229 02:33:30.918775  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.918786  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:30.918794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:30.918867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:30.956794  370051 cri.go:89] found id: ""
	I0229 02:33:30.956838  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.956852  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:30.956860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:30.956935  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:31.000595  370051 cri.go:89] found id: ""
	I0229 02:33:31.000618  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.000628  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:31.000637  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:31.000699  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:31.039060  370051 cri.go:89] found id: ""
	I0229 02:33:31.039089  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.039100  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:31.039111  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:31.039133  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:31.089919  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:31.089949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:31.110276  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:31.110315  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:28.990807  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.993882  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.489703  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.086658  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:32.586407  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:34.588272  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:31.509534  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.511710  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:31.235760  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:31.235791  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:31.235810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:31.323257  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:31.323322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:33.872956  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:33.887953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:33.888034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:33.927887  370051 cri.go:89] found id: ""
	I0229 02:33:33.927926  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.927938  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:33.927945  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:33.928001  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:33.967301  370051 cri.go:89] found id: ""
	I0229 02:33:33.967333  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.967345  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:33.967356  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:33.967425  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:34.009949  370051 cri.go:89] found id: ""
	I0229 02:33:34.009982  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.009992  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:34.009999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:34.010073  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:34.056197  370051 cri.go:89] found id: ""
	I0229 02:33:34.056224  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.056232  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:34.056239  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:34.056314  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:34.107089  370051 cri.go:89] found id: ""
	I0229 02:33:34.107120  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.107132  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:34.107140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:34.107206  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:34.162822  370051 cri.go:89] found id: ""
	I0229 02:33:34.162856  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.162875  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:34.162884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:34.162961  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:34.209963  370051 cri.go:89] found id: ""
	I0229 02:33:34.209993  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.210001  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:34.210008  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:34.210078  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:34.250688  370051 cri.go:89] found id: ""
	I0229 02:33:34.250726  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.250735  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:34.250754  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:34.250768  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:34.298953  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:34.298993  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:34.314067  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:34.314100  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:34.393515  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:34.393536  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:34.393551  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:34.477034  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:34.477078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
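Each diagnostic cycle in this log probes the same eight component names with `sudo crictl ps -a --quiet --name=<component>`. Because `--quiet` prints only container IDs, empty output is exactly what produces the `found id: ""` lines and the `No container was found matching ...` warnings above. A self-contained sketch of that scan (component list taken from the cycle above; error handling simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The components minikube probes in each cycle of this log.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// --quiet prints only container IDs, one per line; empty output
    		// means no container (running or exited) matches the name filter.
    		cmd := fmt.Sprintf("sudo crictl ps -a --quiet --name=%s", name)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }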
	I0229 02:33:35.990175  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.490651  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.087261  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:39.588400  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:36.009933  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.508929  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.025152  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:37.040410  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:37.040491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:37.077922  370051 cri.go:89] found id: ""
	I0229 02:33:37.077953  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.077965  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:37.077973  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:37.078041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:37.137895  370051 cri.go:89] found id: ""
	I0229 02:33:37.137925  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.137938  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:37.137946  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:37.138012  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:37.199291  370051 cri.go:89] found id: ""
	I0229 02:33:37.199324  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.199336  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:37.199344  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:37.199422  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:37.242817  370051 cri.go:89] found id: ""
	I0229 02:33:37.242848  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.242857  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:37.242863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:37.242917  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:37.282171  370051 cri.go:89] found id: ""
	I0229 02:33:37.282196  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.282204  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:37.282211  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:37.282284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:37.328608  370051 cri.go:89] found id: ""
	I0229 02:33:37.328639  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.328647  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:37.328658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:37.328724  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:37.372965  370051 cri.go:89] found id: ""
	I0229 02:33:37.372996  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.373008  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:37.373016  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:37.373091  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:37.417597  370051 cri.go:89] found id: ""
	I0229 02:33:37.417630  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.417642  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:37.417655  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:37.417673  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:37.472023  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:37.472058  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:37.487931  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:37.487961  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:37.568196  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:37.568227  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:37.568245  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:37.658485  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:37.658523  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.203039  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:40.220385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:40.220477  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:40.262962  370051 cri.go:89] found id: ""
	I0229 02:33:40.262993  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.263004  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:40.263016  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:40.263086  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:40.302452  370051 cri.go:89] found id: ""
	I0229 02:33:40.302483  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.302495  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:40.302503  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:40.302560  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:40.342509  370051 cri.go:89] found id: ""
	I0229 02:33:40.342544  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.342557  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:40.342566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:40.342644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:40.385585  370051 cri.go:89] found id: ""
	I0229 02:33:40.385615  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.385629  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:40.385638  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:40.385703  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:40.426839  370051 cri.go:89] found id: ""
	I0229 02:33:40.426874  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.426887  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:40.426896  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:40.426962  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:40.467217  370051 cri.go:89] found id: ""
	I0229 02:33:40.467241  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.467251  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:40.467257  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:40.467332  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:40.513525  370051 cri.go:89] found id: ""
	I0229 02:33:40.513546  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.513553  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:40.513559  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:40.513609  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:40.554187  370051 cri.go:89] found id: ""
	I0229 02:33:40.554256  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.554269  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:40.554282  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:40.554301  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:40.636447  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:40.636477  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:40.636494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:40.716381  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:40.716423  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.761946  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:40.761982  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:40.812828  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:40.812862  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
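Between container scans, each cycle gathers the same five log sources: kubelet and CRI-O via journalctl, dmesg filtered to warning level and above, `kubectl describe nodes` against the guest kubeconfig, and the container-status fallback. The order varies from cycle to cycle (kubelet-first above at 02:33:37, describe-nodes-first at 02:33:40), which is consistent with iterating a Go map, whose order is randomized — an inference from the log, not something the log states. A sketch under that assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The five log sources each cycle gathers; Go map iteration order is
    	// randomized, matching the varying order seen in the log.
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    		"CRI-O":            "sudo journalctl -u crio -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		fmt.Printf("Gathering logs for %s ...\n", name)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			// With no apiserver listening on localhost:8443, the
    			// "describe nodes" command exits non-zero, exactly as the
    			// warnings in this log show.
    			fmt.Printf("failed %s: %v\n%s", name, err, out)
    		}
    	}
    }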
	I0229 02:33:40.492178  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.991517  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.086413  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:44.586663  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:40.510266  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.510702  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:45.013362  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:43.336139  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:43.352278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:43.352361  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:43.392555  370051 cri.go:89] found id: ""
	I0229 02:33:43.392593  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.392607  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:43.392616  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:43.392689  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:43.438169  370051 cri.go:89] found id: ""
	I0229 02:33:43.438202  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.438216  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:43.438242  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:43.438331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:43.476987  370051 cri.go:89] found id: ""
	I0229 02:33:43.477021  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.477033  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:43.477042  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:43.477109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:43.526728  370051 cri.go:89] found id: ""
	I0229 02:33:43.526758  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.526767  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:43.526778  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:43.526833  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:43.572222  370051 cri.go:89] found id: ""
	I0229 02:33:43.572260  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.572273  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:43.572282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:43.572372  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:43.618650  370051 cri.go:89] found id: ""
	I0229 02:33:43.618679  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.618691  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:43.618698  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:43.618764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:43.658069  370051 cri.go:89] found id: ""
	I0229 02:33:43.658104  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.658116  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:43.658126  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:43.658196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:43.700790  370051 cri.go:89] found id: ""
	I0229 02:33:43.700829  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.700841  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:43.700855  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:43.700874  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:43.753330  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:43.753372  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:43.770261  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:43.770294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:43.842407  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:43.842430  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:43.842447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:43.935427  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:43.935470  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:45.490296  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.490514  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.088903  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.585902  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.510105  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.511420  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
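Interleaved with the 370051 diagnostics, three other test processes (369591, 369508, 369869) are each polling a metrics-server pod and emitting a `pod_ready.go:102` status line roughly every 2-3 seconds. A generic sketch of that readiness poll; isPodReady is a hypothetical stand-in for the real lookup of the pod's PodReady condition via the Kubernetes API, not minikube's actual helper:

    package main

    import (
    	"fmt"
    	"time"
    )

    // isPodReady is a hypothetical checker standing in for reading the pod's
    // PodReady condition from the apiserver.
    func isPodReady(namespace, name string) (bool, error) {
    	return false, nil // stub: the pods in this log stay "Ready":"False"
    }

    func main() {
    	const ns, pod = "kube-system", "metrics-server-57f55c9bc5-zghwq"
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		ready, err := isPodReady(ns, pod)
    		if err == nil && ready {
    			fmt.Printf("pod %q in %q namespace is Ready\n", pod, ns)
    			return
    		}
    		// Matches the cadence in the log: a status line every 2-3 seconds.
    		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod, ns)
    		time.Sleep(2500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }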
	I0229 02:33:46.498694  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:46.516463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:46.516541  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:46.554731  370051 cri.go:89] found id: ""
	I0229 02:33:46.554757  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.554766  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:46.554772  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:46.554835  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:46.596851  370051 cri.go:89] found id: ""
	I0229 02:33:46.596892  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.596905  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:46.596912  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:46.596981  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:46.634978  370051 cri.go:89] found id: ""
	I0229 02:33:46.635008  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.635017  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:46.635024  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:46.635089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:46.675302  370051 cri.go:89] found id: ""
	I0229 02:33:46.675334  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.675347  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:46.675355  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:46.675423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:46.717366  370051 cri.go:89] found id: ""
	I0229 02:33:46.717402  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.717413  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:46.717421  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:46.717484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:46.756130  370051 cri.go:89] found id: ""
	I0229 02:33:46.756160  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.756169  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:46.756176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:46.756228  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:46.794283  370051 cri.go:89] found id: ""
	I0229 02:33:46.794312  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.794320  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:46.794328  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:46.794384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:46.836646  370051 cri.go:89] found id: ""
	I0229 02:33:46.836679  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.836691  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:46.836703  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:46.836721  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:46.926532  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:46.926578  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:46.981883  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:46.981915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:47.033571  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:47.033612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:47.049803  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:47.049833  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:47.123389  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
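Every `describe nodes` attempt fails the same way: `The connection to the server localhost:8443 was refused`, meaning nothing is listening on the apiserver port inside the guest. That is consistent with the `pgrep -xnf kube-apiserver.*minikube.*` checks finding no process and the crictl scans finding no containers. A hypothetical sketch (not minikube code) that reproduces both observations:

    package main

    import (
    	"fmt"
    	"net"
    	"os/exec"
    	"time"
    )

    func main() {
    	// pgrep exits non-zero when no kube-apiserver process exists, which is
    	// why each cycle above proceeds straight to the crictl scans.
    	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
    		fmt.Println("no kube-apiserver process:", err)
    	}
    	// A refused TCP dial on 8443 is the condition behind kubectl's
    	// "The connection to the server localhost:8443 was refused" message.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }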
	I0229 02:33:49.623827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:49.638175  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:49.638263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:49.675895  370051 cri.go:89] found id: ""
	I0229 02:33:49.675929  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.675941  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:49.675950  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:49.676009  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:49.720679  370051 cri.go:89] found id: ""
	I0229 02:33:49.720718  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.720730  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:49.720739  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:49.720808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:49.762299  370051 cri.go:89] found id: ""
	I0229 02:33:49.762329  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.762342  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:49.762350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:49.762426  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:49.809330  370051 cri.go:89] found id: ""
	I0229 02:33:49.809364  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.809376  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:49.809391  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:49.809455  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:49.859176  370051 cri.go:89] found id: ""
	I0229 02:33:49.859206  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.859218  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:49.859226  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:49.859292  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:49.914844  370051 cri.go:89] found id: ""
	I0229 02:33:49.914877  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.914890  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:49.914897  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:49.914967  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:49.969640  370051 cri.go:89] found id: ""
	I0229 02:33:49.969667  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.969676  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:49.969682  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:49.969736  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:50.010924  370051 cri.go:89] found id: ""
	I0229 02:33:50.010953  370051 logs.go:276] 0 containers: []
	W0229 02:33:50.010965  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:50.010976  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:50.011002  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:50.089462  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:50.089494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:50.132098  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:50.132129  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:50.182693  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:50.182737  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:50.198209  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:50.198256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:50.281521  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
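Process 370051 repeats the whole attempt roughly every three seconds (02:33:33.8, 02:33:37.0, 02:33:40.2, 02:33:43.3, 02:33:46.5, 02:33:49.6, ...): an outer wait loop that re-checks for the apiserver and re-gathers logs until a startup timeout expires. A sketch of that loop shape, with hypothetical apiserverUp and gatherLogs stand-ins for the checks the log shows:

    package main

    import (
    	"fmt"
    	"time"
    )

    // apiserverUp and gatherLogs are hypothetical stand-ins for the steps in
    // the log: the pgrep/crictl checks and the five log-source collectors.
    func apiserverUp() bool { return false }

    func gatherLogs() {
    	fmt.Println("gathering kubelet / dmesg / describe nodes / CRI-O / container status ...")
    }

    func main() {
    	timeout := time.After(6 * time.Minute)
    	tick := time.NewTicker(3 * time.Second) // matches the ~3s cadence above
    	defer tick.Stop()
    	for {
    		select {
    		case <-timeout:
    			fmt.Println("gave up waiting for kube-apiserver")
    			return
    		case <-tick.C:
    			if apiserverUp() {
    				fmt.Println("kube-apiserver is up")
    				return
    			}
    			gatherLogs()
    		}
    	}
    }

The loop never succeeds in this run: the cycles continue unchanged past 02:34:08 below, which is why the old-k8s-version tests in the summary ultimately time out.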
	I0229 02:33:49.991831  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.489891  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.586298  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:53.587249  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.513176  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:54.010209  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.781677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:52.795962  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:52.796055  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:52.833670  370051 cri.go:89] found id: ""
	I0229 02:33:52.833706  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.833718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:52.833728  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:52.833795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:52.889497  370051 cri.go:89] found id: ""
	I0229 02:33:52.889529  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.889539  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:52.889547  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:52.889616  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:52.952880  370051 cri.go:89] found id: ""
	I0229 02:33:52.952915  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.952927  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:52.952935  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:52.953002  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:53.008380  370051 cri.go:89] found id: ""
	I0229 02:33:53.008409  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.008420  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:53.008434  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:53.008502  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:53.047877  370051 cri.go:89] found id: ""
	I0229 02:33:53.047911  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.047922  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:53.047931  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:53.047999  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:53.086080  370051 cri.go:89] found id: ""
	I0229 02:33:53.086107  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.086118  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:53.086127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:53.086193  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:53.128334  370051 cri.go:89] found id: ""
	I0229 02:33:53.128368  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.128378  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:53.128385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:53.128457  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:53.172201  370051 cri.go:89] found id: ""
	I0229 02:33:53.172232  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.172245  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:53.172258  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:53.172275  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:53.222608  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:53.222648  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:53.239888  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:53.239918  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:53.315827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:53.315850  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:53.315864  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:53.395457  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:53.395498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:55.943418  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:55.960562  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:55.960638  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:56.005181  370051 cri.go:89] found id: ""
	I0229 02:33:56.005210  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.005221  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:56.005229  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:56.005293  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:56.046700  370051 cri.go:89] found id: ""
	I0229 02:33:56.046731  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.046743  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:56.046750  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:56.046814  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:56.088459  370051 cri.go:89] found id: ""
	I0229 02:33:56.088486  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.088497  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:56.088505  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:56.088571  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:56.127729  370051 cri.go:89] found id: ""
	I0229 02:33:56.127762  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.127774  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:56.127783  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:56.127862  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:54.491536  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.493973  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.089188  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.586570  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.011539  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.509708  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.169980  370051 cri.go:89] found id: ""
	I0229 02:33:56.170011  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.170022  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:56.170030  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:56.170098  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:56.210650  370051 cri.go:89] found id: ""
	I0229 02:33:56.210682  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.210694  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:56.210704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:56.210771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:56.247342  370051 cri.go:89] found id: ""
	I0229 02:33:56.247380  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.247391  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:56.247400  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:56.247474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:56.286322  370051 cri.go:89] found id: ""
	I0229 02:33:56.286353  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.286364  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:56.286375  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:56.286393  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:56.335144  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:56.335184  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:56.351322  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:56.351359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:56.424251  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:56.424282  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:56.424299  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:56.506053  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:56.506082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.052805  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:59.067508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:59.067599  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:59.114213  370051 cri.go:89] found id: ""
	I0229 02:33:59.114256  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.114268  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:59.114276  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:59.114327  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:59.161087  370051 cri.go:89] found id: ""
	I0229 02:33:59.161123  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.161136  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:59.161145  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:59.161217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:59.206071  370051 cri.go:89] found id: ""
	I0229 02:33:59.206101  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.206114  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:59.206122  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:59.206196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:59.245152  370051 cri.go:89] found id: ""
	I0229 02:33:59.245179  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.245188  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:59.245194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:59.245247  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:59.286047  370051 cri.go:89] found id: ""
	I0229 02:33:59.286080  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.286092  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:59.286101  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:59.286165  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:59.323171  370051 cri.go:89] found id: ""
	I0229 02:33:59.323203  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.323214  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:59.323222  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:59.323288  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:59.364434  370051 cri.go:89] found id: ""
	I0229 02:33:59.364464  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.364477  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:59.364485  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:59.364554  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:59.405902  370051 cri.go:89] found id: ""
	I0229 02:33:59.405929  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.405938  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:59.405948  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:59.405980  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:59.481810  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:59.481841  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:59.481858  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:59.575726  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:59.575767  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.634808  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:59.634849  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:59.702513  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:59.702552  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:58.991152  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.490426  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:00.587747  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:02.594677  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.010009  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:03.509687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:02.219660  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:02.234037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:02.234105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:02.277956  370051 cri.go:89] found id: ""
	I0229 02:34:02.277982  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.277991  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:02.277998  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:02.278071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:02.322832  370051 cri.go:89] found id: ""
	I0229 02:34:02.322856  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.322869  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:02.322878  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:02.322949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:02.368612  370051 cri.go:89] found id: ""
	I0229 02:34:02.368646  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.368659  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:02.368668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:02.368731  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:02.412436  370051 cri.go:89] found id: ""
	I0229 02:34:02.412466  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.412479  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:02.412486  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:02.412544  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:02.448682  370051 cri.go:89] found id: ""
	I0229 02:34:02.448713  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.448724  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:02.448733  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:02.448803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:02.486676  370051 cri.go:89] found id: ""
	I0229 02:34:02.486705  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.486723  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:02.486730  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:02.486795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:02.531814  370051 cri.go:89] found id: ""
	I0229 02:34:02.531841  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.531852  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:02.531860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:02.531934  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:02.569800  370051 cri.go:89] found id: ""
	I0229 02:34:02.569835  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.569845  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:02.569857  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:02.569871  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:02.623903  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:02.623937  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:02.643856  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:02.643884  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:02.735520  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:02.735544  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:02.735563  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:02.816572  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:02.816612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.371459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:05.385179  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:05.385255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:05.424653  370051 cri.go:89] found id: ""
	I0229 02:34:05.424679  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.424687  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:05.424694  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:05.424752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:05.463726  370051 cri.go:89] found id: ""
	I0229 02:34:05.463754  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.463763  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:05.463769  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:05.463823  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:05.510367  370051 cri.go:89] found id: ""
	I0229 02:34:05.510396  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.510407  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:05.510415  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:05.510480  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:05.548421  370051 cri.go:89] found id: ""
	I0229 02:34:05.548445  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.548455  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:05.548461  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:05.548527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:05.588778  370051 cri.go:89] found id: ""
	I0229 02:34:05.588801  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.588809  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:05.588815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:05.588875  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:05.638449  370051 cri.go:89] found id: ""
	I0229 02:34:05.638479  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.638490  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:05.638506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:05.638567  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:05.709921  370051 cri.go:89] found id: ""
	I0229 02:34:05.709950  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.709964  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:05.709972  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:05.710038  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:05.756965  370051 cri.go:89] found id: ""
	I0229 02:34:05.756992  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.757000  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:05.757010  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:05.757025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:05.826878  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:05.826904  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:05.826921  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:05.909205  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:05.909256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.954537  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:05.954594  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:06.004157  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:06.004203  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:03.989381  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.990323  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.491379  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.086296  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:07.586477  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.511758  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.009545  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.010247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.522975  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:08.539247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:08.539326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:08.579776  370051 cri.go:89] found id: ""
	I0229 02:34:08.579806  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.579817  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:08.579826  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:08.579890  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:08.628415  370051 cri.go:89] found id: ""
	I0229 02:34:08.628444  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.628456  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:08.628468  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:08.628534  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:08.690499  370051 cri.go:89] found id: ""
	I0229 02:34:08.690530  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.690540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:08.690547  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:08.690613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:08.739755  370051 cri.go:89] found id: ""
	I0229 02:34:08.739788  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.739801  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:08.739809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:08.739906  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:08.781693  370051 cri.go:89] found id: ""
	I0229 02:34:08.781721  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.781733  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:08.781742  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:08.781808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:08.818605  370051 cri.go:89] found id: ""
	I0229 02:34:08.818637  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.818645  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:08.818652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:08.818713  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:08.861533  370051 cri.go:89] found id: ""
	I0229 02:34:08.861559  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.861569  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:08.861578  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:08.861658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:08.902727  370051 cri.go:89] found id: ""
	I0229 02:34:08.902758  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.902771  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:08.902784  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:08.902801  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:08.948527  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:08.948567  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:08.999883  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:08.999916  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:09.015438  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:09.015467  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:09.087965  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:09.087994  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:09.088010  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:10.990135  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.991074  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.085517  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.086653  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:14.086817  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.510247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:15.010412  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:11.671443  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:11.702197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:11.702322  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:11.755104  370051 cri.go:89] found id: ""
	I0229 02:34:11.755136  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.755147  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:11.755153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:11.755204  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:11.794190  370051 cri.go:89] found id: ""
	I0229 02:34:11.794218  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.794239  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:11.794247  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:11.794310  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:11.837330  370051 cri.go:89] found id: ""
	I0229 02:34:11.837360  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.837372  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:11.837380  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:11.837447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:11.876682  370051 cri.go:89] found id: ""
	I0229 02:34:11.876716  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.876726  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:11.876734  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:11.876805  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:11.922172  370051 cri.go:89] found id: ""
	I0229 02:34:11.922239  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.922262  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:11.922271  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:11.922341  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:11.962218  370051 cri.go:89] found id: ""
	I0229 02:34:11.962270  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.962283  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:11.962291  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:11.962375  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:12.002075  370051 cri.go:89] found id: ""
	I0229 02:34:12.002101  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.002110  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:12.002117  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:12.002169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:12.043337  370051 cri.go:89] found id: ""
	I0229 02:34:12.043378  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.043399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:12.043412  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:12.043428  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:12.094458  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:12.094491  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:12.112374  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:12.112401  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:12.193665  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:12.193689  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:12.193717  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:12.282510  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:12.282553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:14.828451  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:14.843626  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:14.843690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:14.884181  370051 cri.go:89] found id: ""
	I0229 02:34:14.884214  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.884226  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:14.884235  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:14.884302  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:14.926312  370051 cri.go:89] found id: ""
	I0229 02:34:14.926347  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.926361  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:14.926369  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:14.926436  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:14.969147  370051 cri.go:89] found id: ""
	I0229 02:34:14.969182  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.969195  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:14.969207  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:14.969277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:15.013000  370051 cri.go:89] found id: ""
	I0229 02:34:15.013045  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.013055  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:15.013064  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:15.013120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:15.055811  370051 cri.go:89] found id: ""
	I0229 02:34:15.055849  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.055861  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:15.055869  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:15.055939  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:15.100736  370051 cri.go:89] found id: ""
	I0229 02:34:15.100768  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.100780  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:15.100789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:15.100867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:15.140115  370051 cri.go:89] found id: ""
	I0229 02:34:15.140151  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.140164  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:15.140172  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:15.140239  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:15.183545  370051 cri.go:89] found id: ""
	I0229 02:34:15.183576  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.183588  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:15.183602  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:15.183621  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:15.258646  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:15.258676  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:15.258693  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:15.347035  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:15.347082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:15.407148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:15.407178  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:15.466695  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:15.466741  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:15.490797  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.990851  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:16.585993  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:18.587604  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.509114  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:19.509856  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.989102  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:18.005052  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:18.005126  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:18.044687  370051 cri.go:89] found id: ""
	I0229 02:34:18.044714  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.044725  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:18.044739  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:18.044815  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:18.085904  370051 cri.go:89] found id: ""
	I0229 02:34:18.085934  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.085944  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:18.085952  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:18.086017  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:18.129958  370051 cri.go:89] found id: ""
	I0229 02:34:18.129985  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.129994  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:18.129999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:18.130052  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:18.166942  370051 cri.go:89] found id: ""
	I0229 02:34:18.166979  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.166991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:18.167000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:18.167056  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:18.205297  370051 cri.go:89] found id: ""
	I0229 02:34:18.205324  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.205331  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:18.205337  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:18.205410  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:18.246415  370051 cri.go:89] found id: ""
	I0229 02:34:18.246448  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.246461  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:18.246469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:18.246527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:18.285534  370051 cri.go:89] found id: ""
	I0229 02:34:18.285573  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.285585  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:18.285600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:18.285662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:18.327624  370051 cri.go:89] found id: ""
	I0229 02:34:18.327651  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.327659  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:18.327670  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:18.327684  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:18.383307  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:18.383351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:18.408127  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:18.408162  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:18.502036  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:18.502070  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:18.502093  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:18.582289  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:18.582340  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:20.490582  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:22.990210  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.086446  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:23.586600  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.511411  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:24.009976  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.135649  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:21.149411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:21.149498  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:21.198246  370051 cri.go:89] found id: ""
	I0229 02:34:21.198286  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.198298  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:21.198306  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:21.198378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:21.240168  370051 cri.go:89] found id: ""
	I0229 02:34:21.240195  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.240203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:21.240209  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:21.240275  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:21.281243  370051 cri.go:89] found id: ""
	I0229 02:34:21.281277  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.281288  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:21.281296  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:21.281359  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:21.321573  370051 cri.go:89] found id: ""
	I0229 02:34:21.321609  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.321621  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:21.321629  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:21.321693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:21.375156  370051 cri.go:89] found id: ""
	I0229 02:34:21.375212  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.375226  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:21.375234  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:21.375308  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:21.430450  370051 cri.go:89] found id: ""
	I0229 02:34:21.430487  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.430499  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:21.430508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:21.430576  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:21.475095  370051 cri.go:89] found id: ""
	I0229 02:34:21.475124  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.475135  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:21.475144  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:21.475215  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:21.517378  370051 cri.go:89] found id: ""
	I0229 02:34:21.517403  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.517412  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:21.517424  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:21.517444  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:21.534103  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:21.534147  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:21.608375  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:21.608400  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:21.608412  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:21.691912  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:21.691950  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:21.744366  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:21.744406  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.295384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:24.309456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:24.309539  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:24.370125  370051 cri.go:89] found id: ""
	I0229 02:34:24.370156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.370167  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:24.370175  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:24.370256  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:24.439458  370051 cri.go:89] found id: ""
	I0229 02:34:24.439487  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.439499  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:24.439506  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:24.439639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:24.478070  370051 cri.go:89] found id: ""
	I0229 02:34:24.478105  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.478119  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:24.478127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:24.478194  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:24.517128  370051 cri.go:89] found id: ""
	I0229 02:34:24.517156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.517168  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:24.517176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:24.517243  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:24.555502  370051 cri.go:89] found id: ""
	I0229 02:34:24.555537  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.555549  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:24.555557  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:24.555625  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:24.601261  370051 cri.go:89] found id: ""
	I0229 02:34:24.601295  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.601307  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:24.601315  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:24.601389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:24.639110  370051 cri.go:89] found id: ""
	I0229 02:34:24.639141  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.639153  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:24.639161  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:24.639224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:24.681448  370051 cri.go:89] found id: ""
	I0229 02:34:24.681478  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.681487  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:24.681498  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:24.681517  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.730735  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:24.730775  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:24.746996  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:24.747031  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:24.827581  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:24.827608  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:24.827628  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:24.909551  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:24.909596  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:24.990581  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.489787  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:25.586672  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.586999  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:26.509819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:29.009014  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.455967  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:27.477411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:27.477487  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:27.523163  370051 cri.go:89] found id: ""
	I0229 02:34:27.523189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.523198  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:27.523203  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:27.523258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:27.562298  370051 cri.go:89] found id: ""
	I0229 02:34:27.562330  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.562343  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:27.562350  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:27.562420  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:27.603506  370051 cri.go:89] found id: ""
	I0229 02:34:27.603532  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.603540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:27.603554  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:27.603619  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:27.646971  370051 cri.go:89] found id: ""
	I0229 02:34:27.647002  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.647014  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:27.647031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:27.647109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:27.685124  370051 cri.go:89] found id: ""
	I0229 02:34:27.685149  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.685160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:27.685169  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:27.685235  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:27.726976  370051 cri.go:89] found id: ""
	I0229 02:34:27.727007  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.727018  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:27.727026  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:27.727089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:27.767159  370051 cri.go:89] found id: ""
	I0229 02:34:27.767189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.767197  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:27.767204  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:27.767272  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:27.810377  370051 cri.go:89] found id: ""
	I0229 02:34:27.810411  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.810420  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:27.810431  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:27.810447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:27.858094  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:27.858136  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:27.874407  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:27.874440  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:27.953065  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:27.953092  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:27.953108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:28.042244  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:28.042278  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:30.588227  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:30.604954  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:30.605037  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:30.642069  370051 cri.go:89] found id: ""
	I0229 02:34:30.642100  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.642108  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:30.642119  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:30.642187  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:30.686212  370051 cri.go:89] found id: ""
	I0229 02:34:30.686264  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.686277  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:30.686285  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:30.686364  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:30.726668  370051 cri.go:89] found id: ""
	I0229 02:34:30.726702  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.726715  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:30.726723  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:30.726788  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:30.766850  370051 cri.go:89] found id: ""
	I0229 02:34:30.766883  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.766895  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:30.766904  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:30.766979  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:30.808972  370051 cri.go:89] found id: ""
	I0229 02:34:30.809002  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.809015  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:30.809023  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:30.809093  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:30.851992  370051 cri.go:89] found id: ""
	I0229 02:34:30.852016  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.852025  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:30.852031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:30.852096  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:30.891100  370051 cri.go:89] found id: ""
	I0229 02:34:30.891132  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.891144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:30.891157  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:30.891227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:30.931740  370051 cri.go:89] found id: ""
	I0229 02:34:30.931768  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.931777  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:30.931787  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:30.931808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:31.010896  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:31.010919  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:31.010936  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:31.094626  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:31.094662  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:29.490211  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.490659  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:30.086898  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:32.587485  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.010003  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:33.510267  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.150765  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:31.150804  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:31.202932  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:31.202976  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:33.723355  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:33.738651  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:33.738753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:33.778255  370051 cri.go:89] found id: ""
	I0229 02:34:33.778287  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.778299  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:33.778307  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:33.778384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:33.818360  370051 cri.go:89] found id: ""
	I0229 02:34:33.818396  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.818406  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:33.818412  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:33.818564  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:33.866781  370051 cri.go:89] found id: ""
	I0229 02:34:33.866814  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.866824  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:33.866831  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:33.866891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:33.910013  370051 cri.go:89] found id: ""
	I0229 02:34:33.910051  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.910063  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:33.910072  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:33.910146  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:33.956068  370051 cri.go:89] found id: ""
	I0229 02:34:33.956098  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.956106  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:33.956113  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:33.956170  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:34.004997  370051 cri.go:89] found id: ""
	I0229 02:34:34.005027  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.005038  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:34.005047  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:34.005113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:34.059266  370051 cri.go:89] found id: ""
	I0229 02:34:34.059293  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.059302  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:34.059307  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:34.059363  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:34.105601  370051 cri.go:89] found id: ""
	I0229 02:34:34.105631  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.105643  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:34.105654  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:34.105669  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:34.208723  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:34.208764  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:34.262105  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:34.262137  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:34.314528  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:34.314571  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:34.332441  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:34.332477  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:34.406303  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:33.990257  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.490844  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:35.085482  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:37.086532  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:39.087022  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.015574  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:38.510064  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.906814  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:36.922297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:36.922377  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:36.967550  370051 cri.go:89] found id: ""
	I0229 02:34:36.967578  370051 logs.go:276] 0 containers: []
	W0229 02:34:36.967589  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:36.967599  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:36.967662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:37.007589  370051 cri.go:89] found id: ""
	I0229 02:34:37.007614  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.007624  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:37.007632  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:37.007706  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:37.048230  370051 cri.go:89] found id: ""
	I0229 02:34:37.048260  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.048273  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:37.048281  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:37.048354  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:37.089329  370051 cri.go:89] found id: ""
	I0229 02:34:37.089355  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.089365  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:37.089373  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:37.089441  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:37.144654  370051 cri.go:89] found id: ""
	I0229 02:34:37.144687  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.144699  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:37.144708  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:37.144778  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:37.203822  370051 cri.go:89] found id: ""
	I0229 02:34:37.203857  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.203868  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:37.203876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:37.203948  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:37.250369  370051 cri.go:89] found id: ""
	I0229 02:34:37.250398  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.250410  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:37.250417  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:37.250490  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:37.290924  370051 cri.go:89] found id: ""
	I0229 02:34:37.290957  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.290969  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:37.290981  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:37.290995  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:37.343878  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:37.343920  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:37.359307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:37.359336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:37.435264  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:37.435292  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:37.435309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:37.518274  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:37.518309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
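	(The container-status command above uses a shell fallback: prefer crictl, and fall back to docker ps if crictl is unavailable or fails. A simplified Go mirror of that pattern, a hypothetical helper rather than the code in logs.go, and using exec.LookPath where the log's shell line uses `which`:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // containerStatus tries crictl first and falls back to docker,
	    // mirroring the fallback in the log line above.
	    func containerStatus() ([]byte, error) {
	    	if _, err := exec.LookPath("crictl"); err == nil {
	    		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
	    			return out, nil
	    		}
	    	}
	    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	    }

	    func main() {
	    	out, err := containerStatus()
	    	if err != nil {
	    		fmt.Println("both runtimes unavailable:", err)
	    		return
	    	}
	    	fmt.Print(string(out))
	    }

	On this CRI-O cluster the crictl branch succeeds, but it reports no control-plane containers at all.)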
	I0229 02:34:40.062232  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:40.079883  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:40.079957  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:40.123826  370051 cri.go:89] found id: ""
	I0229 02:34:40.123856  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.123866  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:40.123874  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:40.123943  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:40.190273  370051 cri.go:89] found id: ""
	I0229 02:34:40.190321  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.190332  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:40.190338  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:40.190395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:40.232921  370051 cri.go:89] found id: ""
	I0229 02:34:40.232949  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.232961  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:40.232968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:40.233034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:40.273490  370051 cri.go:89] found id: ""
	I0229 02:34:40.273517  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.273526  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:40.273538  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:40.273594  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:40.317121  370051 cri.go:89] found id: ""
	I0229 02:34:40.317152  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.317163  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:40.317171  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:40.317230  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:40.363347  370051 cri.go:89] found id: ""
	I0229 02:34:40.363380  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.363389  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:40.363396  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:40.363459  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:40.407187  370051 cri.go:89] found id: ""
	I0229 02:34:40.407213  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.407222  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:40.407231  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:40.407282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:40.447185  370051 cri.go:89] found id: ""
	I0229 02:34:40.447218  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.447229  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:40.447242  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:40.447258  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:40.496998  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:40.497029  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:40.512520  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:40.512549  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:40.589150  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:40.589173  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:40.589190  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:40.677054  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:40.677096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:38.991307  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:40.992688  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.490195  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.585962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.586942  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.009837  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.510138  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
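	(The interleaved pod_ready.go lines come from three concurrent test runs, pids 369591, 369508, and 369869, each polling a metrics-server pod whose Ready condition never turns True. A minimal sketch of reading that condition with kubectl's JSONPath output, assuming kubectl and a working kubeconfig context, and using the pod name taken from the log above:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // podReady returns true when the pod's Ready condition is "True",
	    // the same check the pod_ready.go polling above keeps reporting
	    // as "False".
	    func podReady(namespace, pod string) (bool, error) {
	    	out, err := exec.Command("kubectl", "get", "pod", pod,
	    		"-n", namespace, "-o",
	    		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	    	if err != nil {
	    		return false, err
	    	}
	    	return strings.TrimSpace(string(out)) == "True", nil
	    }

	    func main() {
	    	ready, err := podReady("kube-system", "metrics-server-57f55c9bc5-zghwq")
	    	fmt.Println("ready:", ready, "err:", err)
	    }

	The tests loop on exactly this kind of check until their timeout expires, which is why the same three pod names recur for minutes of log time.)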
	I0229 02:34:43.222265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:43.236567  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:43.236629  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:43.282917  370051 cri.go:89] found id: ""
	I0229 02:34:43.282959  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.282976  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:43.282982  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:43.283049  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:43.329273  370051 cri.go:89] found id: ""
	I0229 02:34:43.329302  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.329313  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:43.329321  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:43.329386  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:43.366696  370051 cri.go:89] found id: ""
	I0229 02:34:43.366723  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.366732  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:43.366739  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:43.366800  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:43.405793  370051 cri.go:89] found id: ""
	I0229 02:34:43.405820  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.405828  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:43.405834  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:43.405888  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:43.442870  370051 cri.go:89] found id: ""
	I0229 02:34:43.442898  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.442906  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:43.442912  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:43.442964  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:43.484581  370051 cri.go:89] found id: ""
	I0229 02:34:43.484615  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.484626  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:43.484635  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:43.484702  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:43.530931  370051 cri.go:89] found id: ""
	I0229 02:34:43.530954  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.530963  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:43.530968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:43.531024  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:43.572810  370051 cri.go:89] found id: ""
	I0229 02:34:43.572838  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.572850  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:43.572867  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:43.572883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:43.622815  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:43.622854  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:43.637972  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:43.638012  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:43.713704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:43.713728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:43.713746  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:43.797178  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:43.797220  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:45.490670  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:47.989828  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:45.587464  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.090384  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.009454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.010403  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.347159  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:46.361601  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:46.361682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:46.399751  370051 cri.go:89] found id: ""
	I0229 02:34:46.399784  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.399795  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:46.399804  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:46.399870  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:46.445367  370051 cri.go:89] found id: ""
	I0229 02:34:46.445398  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.445407  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:46.445413  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:46.445486  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:46.490323  370051 cri.go:89] found id: ""
	I0229 02:34:46.490363  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.490385  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:46.490393  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:46.490473  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:46.531406  370051 cri.go:89] found id: ""
	I0229 02:34:46.531441  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.531450  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:46.531456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:46.531507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:46.572759  370051 cri.go:89] found id: ""
	I0229 02:34:46.572787  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.572795  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:46.572804  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:46.572908  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:46.613055  370051 cri.go:89] found id: ""
	I0229 02:34:46.613083  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.613093  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:46.613099  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:46.613153  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:46.657504  370051 cri.go:89] found id: ""
	I0229 02:34:46.657536  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.657544  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:46.657550  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:46.657605  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:46.698008  370051 cri.go:89] found id: ""
	I0229 02:34:46.698057  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.698068  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:46.698080  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:46.698097  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:46.746648  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:46.746682  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:46.761190  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:46.761219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:46.843379  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:46.843403  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:46.843415  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:46.933493  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:46.933546  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.491837  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:49.508647  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:49.508717  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:49.550752  370051 cri.go:89] found id: ""
	I0229 02:34:49.550788  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.550800  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:49.550809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:49.550883  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:49.597623  370051 cri.go:89] found id: ""
	I0229 02:34:49.597663  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.597675  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:49.597683  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:49.597764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:49.635207  370051 cri.go:89] found id: ""
	I0229 02:34:49.635230  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.635238  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:49.635282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:49.635336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:49.674664  370051 cri.go:89] found id: ""
	I0229 02:34:49.674696  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.674708  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:49.674716  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:49.674777  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:49.715391  370051 cri.go:89] found id: ""
	I0229 02:34:49.715420  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.715433  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:49.715442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:49.715497  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:49.753318  370051 cri.go:89] found id: ""
	I0229 02:34:49.753352  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.753373  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:49.753382  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:49.753451  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:49.791342  370051 cri.go:89] found id: ""
	I0229 02:34:49.791369  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.791377  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:49.791384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:49.791456  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:49.838148  370051 cri.go:89] found id: ""
	I0229 02:34:49.838181  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.838191  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:49.838204  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:49.838244  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:49.891532  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:49.891568  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:49.917625  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:49.917664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:50.019436  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:50.019457  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:50.019472  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:50.108302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:50.108349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.991272  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.491139  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.586940  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.509504  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:53.010818  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.654561  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:52.668331  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:52.668402  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:52.718431  370051 cri.go:89] found id: ""
	I0229 02:34:52.718471  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.718484  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:52.718493  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:52.718551  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:52.757913  370051 cri.go:89] found id: ""
	I0229 02:34:52.757946  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.757957  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:52.757965  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:52.758035  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:52.796792  370051 cri.go:89] found id: ""
	I0229 02:34:52.796821  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.796833  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:52.796842  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:52.796913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:52.832157  370051 cri.go:89] found id: ""
	I0229 02:34:52.832187  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.832196  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:52.832203  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:52.832264  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:52.879170  370051 cri.go:89] found id: ""
	I0229 02:34:52.879197  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.879206  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:52.879212  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:52.879265  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:52.924219  370051 cri.go:89] found id: ""
	I0229 02:34:52.924249  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.924258  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:52.924264  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:52.924318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:52.980422  370051 cri.go:89] found id: ""
	I0229 02:34:52.980450  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.980457  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:52.980463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:52.980525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:53.026393  370051 cri.go:89] found id: ""
	I0229 02:34:53.026418  370051 logs.go:276] 0 containers: []
	W0229 02:34:53.026426  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:53.026436  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:53.026453  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:53.075135  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:53.075174  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:53.092197  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:53.092223  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:53.164397  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:53.164423  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:53.164439  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:53.250310  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:53.250366  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:55.792993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:55.807152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:55.807229  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:55.867791  370051 cri.go:89] found id: ""
	I0229 02:34:55.867821  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.867830  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:55.867847  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:55.867925  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:55.922960  370051 cri.go:89] found id: ""
	I0229 02:34:55.922989  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.923001  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:55.923009  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:55.923076  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:55.972510  370051 cri.go:89] found id: ""
	I0229 02:34:55.972541  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.972552  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:55.972560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:55.972632  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:56.011948  370051 cri.go:89] found id: ""
	I0229 02:34:56.011980  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.011990  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:56.011999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:56.012077  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:56.052624  370051 cri.go:89] found id: ""
	I0229 02:34:56.052653  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.052662  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:56.052668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:56.052722  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:56.089075  370051 cri.go:89] found id: ""
	I0229 02:34:56.089100  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.089108  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:56.089114  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:56.089180  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:56.130369  370051 cri.go:89] found id: ""
	I0229 02:34:56.130403  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.130416  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:56.130424  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:56.130496  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:54.989569  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.991424  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.085652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.585291  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.586439  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.509734  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.510165  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.511749  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.177812  370051 cri.go:89] found id: ""
	I0229 02:34:56.177843  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.177854  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:56.177875  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:56.177894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:56.224294  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:56.224336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:56.275874  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:56.275909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:56.291172  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:56.291202  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:56.364839  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:56.364870  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:56.364888  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:58.950871  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:58.966327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:58.966389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:59.005914  370051 cri.go:89] found id: ""
	I0229 02:34:59.005952  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.005968  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:59.005976  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:59.006045  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:59.043962  370051 cri.go:89] found id: ""
	I0229 02:34:59.043993  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.044005  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:59.044013  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:59.044167  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:59.089398  370051 cri.go:89] found id: ""
	I0229 02:34:59.089426  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.089434  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:59.089440  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:59.089491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:59.130786  370051 cri.go:89] found id: ""
	I0229 02:34:59.130815  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.130824  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:59.130830  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:59.130909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:59.174807  370051 cri.go:89] found id: ""
	I0229 02:34:59.174836  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.174848  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:59.174855  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:59.174929  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:59.217745  370051 cri.go:89] found id: ""
	I0229 02:34:59.217792  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.217800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:59.217806  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:59.217858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:59.260906  370051 cri.go:89] found id: ""
	I0229 02:34:59.260939  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.260950  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:59.260957  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:59.261025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:59.299114  370051 cri.go:89] found id: ""
	I0229 02:34:59.299140  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.299150  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:59.299161  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:59.299173  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:59.349630  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:59.349672  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:59.365679  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:59.365710  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:59.438234  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:59.438261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:59.438280  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:59.524185  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:59.524219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
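	(Note the cadence: each full gather cycle, pgrep probe, eight crictl listings, then kubelet/dmesg/describe-nodes/CRI-O/container-status logs, restarts roughly every three seconds. A generic poll-with-timeout sketch of that retry shape, illustrative only and not minikube's wait code:

	    package main

	    import (
	    	"fmt"
	    	"time"
	    )

	    // pollUntil retries check every interval until it succeeds or the
	    // deadline passes, roughly the ~3s cadence of the cycles in this log.
	    func pollUntil(interval, timeout time.Duration, check func() bool) bool {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		if check() {
	    			return true
	    		}
	    		time.Sleep(interval)
	    	}
	    	return false
	    }

	    func main() {
	    	ok := pollUntil(3*time.Second, 30*time.Second, func() bool {
	    		fmt.Println("probing...")
	    		return false // stand-in for the apiserver check that never succeeds here
	    	})
	    	fmt.Println("became ready:", ok)
	    }

	Because the check never succeeds in this run, the cycles continue verbatim below until the test's own timeout fires.)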
	I0229 02:34:58.991975  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:01.489719  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:03.490315  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.087731  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.585197  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.008802  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.509210  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.068320  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:02.082910  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:02.082988  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:02.122095  370051 cri.go:89] found id: ""
	I0229 02:35:02.122132  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.122145  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:02.122153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:02.122245  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:02.160982  370051 cri.go:89] found id: ""
	I0229 02:35:02.161013  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.161029  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:02.161043  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:02.161108  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:02.200603  370051 cri.go:89] found id: ""
	I0229 02:35:02.200637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.200650  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:02.200658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:02.200746  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:02.243100  370051 cri.go:89] found id: ""
	I0229 02:35:02.243126  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.243134  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:02.243140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:02.243207  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:02.282758  370051 cri.go:89] found id: ""
	I0229 02:35:02.282793  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.282806  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:02.282815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:02.282884  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:02.324402  370051 cri.go:89] found id: ""
	I0229 02:35:02.324434  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.324444  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:02.324455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:02.324520  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:02.368608  370051 cri.go:89] found id: ""
	I0229 02:35:02.368637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.368650  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:02.368658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:02.368726  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:02.411449  370051 cri.go:89] found id: ""
	I0229 02:35:02.411484  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.411497  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:02.411509  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:02.411526  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:02.427942  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:02.427974  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:02.498848  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:02.498884  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:02.498902  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:02.585701  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:02.585749  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:02.642055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:02.642096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.201769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:05.215944  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:05.216020  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:05.254080  370051 cri.go:89] found id: ""
	I0229 02:35:05.254107  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.254121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:05.254128  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:05.254179  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:05.296990  370051 cri.go:89] found id: ""
	I0229 02:35:05.297022  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.297034  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:05.297042  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:05.297111  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:05.336241  370051 cri.go:89] found id: ""
	I0229 02:35:05.336275  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.336290  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:05.336299  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:05.336395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:05.377620  370051 cri.go:89] found id: ""
	I0229 02:35:05.377649  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.377658  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:05.377664  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:05.377712  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:05.416275  370051 cri.go:89] found id: ""
	I0229 02:35:05.416303  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.416311  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:05.416318  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:05.416373  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:05.455375  370051 cri.go:89] found id: ""
	I0229 02:35:05.455412  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.455426  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:05.455436  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:05.455507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:05.495862  370051 cri.go:89] found id: ""
	I0229 02:35:05.495887  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.495897  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:05.495905  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:05.495969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:05.541218  370051 cri.go:89] found id: ""
	I0229 02:35:05.541247  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.541260  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:05.541273  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:05.541288  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:05.629982  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:05.630023  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:05.719026  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:05.719066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.785318  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:05.785359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:05.801181  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:05.801214  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:05.871333  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:05.490857  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:07.991044  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.587458  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:09.086313  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.510265  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.510391  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.371982  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:08.386451  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:08.386514  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:08.430045  370051 cri.go:89] found id: ""
	I0229 02:35:08.430077  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.430090  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:08.430099  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:08.430169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:08.470547  370051 cri.go:89] found id: ""
	I0229 02:35:08.470583  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.470596  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:08.470604  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:08.470671  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:08.512637  370051 cri.go:89] found id: ""
	I0229 02:35:08.512676  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.512687  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:08.512695  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:08.512759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:08.556228  370051 cri.go:89] found id: ""
	I0229 02:35:08.556263  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.556271  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:08.556277  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:08.556335  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:08.613838  370051 cri.go:89] found id: ""
	I0229 02:35:08.613868  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.613878  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:08.613884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:08.613940  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:08.686408  370051 cri.go:89] found id: ""
	I0229 02:35:08.686442  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.686454  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:08.686462  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:08.686519  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:08.725665  370051 cri.go:89] found id: ""
	I0229 02:35:08.725697  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.725710  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:08.725719  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:08.725776  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:08.765639  370051 cri.go:89] found id: ""
	I0229 02:35:08.765666  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.765674  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:08.765684  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:08.765695  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:08.813097  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:08.813135  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:08.828880  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:08.828909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:08.903237  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:08.903261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:08.903281  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:08.991710  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:08.991745  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
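	The block above is one full pass of minikube's offline diagnostics: with nothing answering on localhost:8443, `kubectl describe nodes` is refused, so logs.go falls back to querying the CRI runtime for each control-plane container and tailing host journals over SSH. A minimal by-hand equivalent, assuming shell access to the node (for example via `minikube ssh`); the component names and tail depths mirror the commands in the log:

	# Is any apiserver process alive at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Ask the CRI runtime for each expected container, running or exited
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$c"
	done
	# Host-level fallbacks when no containers are found
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	Because `crictl ps -a` includes exited containers, the empty result for every component seen here suggests the control-plane containers were never created under CRI-O (or were already removed), not merely that they crashed after starting.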
	I0229 02:35:10.491022  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:12.491159  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.086828  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.586274  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.009650  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.011571  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.536724  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:11.551614  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:11.551690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:11.593078  370051 cri.go:89] found id: ""
	I0229 02:35:11.593110  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.593121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:11.593129  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:11.593185  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:11.645696  370051 cri.go:89] found id: ""
	I0229 02:35:11.645729  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.645742  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:11.645751  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:11.645820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:11.691181  370051 cri.go:89] found id: ""
	I0229 02:35:11.691213  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.691226  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:11.691245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:11.691318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:11.745906  370051 cri.go:89] found id: ""
	I0229 02:35:11.745933  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.745946  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:11.745953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:11.746019  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:11.784895  370051 cri.go:89] found id: ""
	I0229 02:35:11.784927  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.784940  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:11.784949  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:11.785025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:11.825341  370051 cri.go:89] found id: ""
	I0229 02:35:11.825372  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.825384  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:11.825392  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:11.825464  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:11.862454  370051 cri.go:89] found id: ""
	I0229 02:35:11.862492  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.862505  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:11.862523  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:11.862604  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:11.908424  370051 cri.go:89] found id: ""
	I0229 02:35:11.908450  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.908459  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:11.908469  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:11.908487  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:11.956274  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:11.956313  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:11.972363  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:11.972397  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:12.052030  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:12.052057  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:12.052078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:12.138388  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:12.138431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.691474  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:14.724652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:14.724739  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:14.765210  370051 cri.go:89] found id: ""
	I0229 02:35:14.765237  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.765246  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:14.765253  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:14.765306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:14.808226  370051 cri.go:89] found id: ""
	I0229 02:35:14.808258  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.808270  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:14.808287  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:14.808357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:14.847999  370051 cri.go:89] found id: ""
	I0229 02:35:14.848030  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.848041  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:14.848049  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:14.848123  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:14.887221  370051 cri.go:89] found id: ""
	I0229 02:35:14.887248  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.887256  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:14.887263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:14.887339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:14.929905  370051 cri.go:89] found id: ""
	I0229 02:35:14.929933  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.929950  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:14.929956  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:14.930011  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:14.969697  370051 cri.go:89] found id: ""
	I0229 02:35:14.969739  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.969761  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:14.969770  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:14.969837  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:15.013387  370051 cri.go:89] found id: ""
	I0229 02:35:15.013418  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.013429  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:15.013437  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:15.013493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:15.058199  370051 cri.go:89] found id: ""
	I0229 02:35:15.058240  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.058253  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:15.058270  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:15.058287  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:15.110165  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:15.110213  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:15.127417  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:15.127452  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:15.203330  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:15.203370  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:15.203405  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:15.283455  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:15.283501  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.991352  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.490127  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.586556  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:18.085962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.509530  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.512518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:20.009873  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.829187  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:17.844678  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:17.844759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:17.885549  370051 cri.go:89] found id: ""
	I0229 02:35:17.885581  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.885594  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:17.885601  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:17.885670  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:17.925652  370051 cri.go:89] found id: ""
	I0229 02:35:17.925679  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.925691  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:17.925699  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:17.925766  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:17.963172  370051 cri.go:89] found id: ""
	I0229 02:35:17.963203  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.963215  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:17.963224  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:17.963282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:18.003528  370051 cri.go:89] found id: ""
	I0229 02:35:18.003560  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.003572  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:18.003579  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:18.003644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:18.046494  370051 cri.go:89] found id: ""
	I0229 02:35:18.046526  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.046537  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:18.046545  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:18.046613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:18.084963  370051 cri.go:89] found id: ""
	I0229 02:35:18.084993  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.085004  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:18.085013  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:18.085074  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:18.125521  370051 cri.go:89] found id: ""
	I0229 02:35:18.125547  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.125556  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:18.125563  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:18.125623  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:18.169963  370051 cri.go:89] found id: ""
	I0229 02:35:18.169995  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.170006  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:18.170020  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:18.170035  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:18.225414  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:18.225460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:18.242069  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:18.242108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:18.312704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:18.312728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:18.312742  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:18.397206  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:18.397249  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:20.968000  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:20.983115  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:20.983196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:21.025710  370051 cri.go:89] found id: ""
	I0229 02:35:21.025735  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.025743  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:21.025749  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:21.025812  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:21.065825  370051 cri.go:89] found id: ""
	I0229 02:35:21.065854  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.065862  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:21.065868  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:21.065928  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:21.104738  370051 cri.go:89] found id: ""
	I0229 02:35:21.104770  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.104782  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:21.104790  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:21.104871  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:19.990622  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491026  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491059  369591 pod_ready.go:81] duration metric: took 4m0.008454624s waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:22.491069  369591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:35:22.491077  369591 pod_ready.go:38] duration metric: took 4m5.576507129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:22.491094  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:35:22.491124  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:22.491174  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:22.562384  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:22.562412  369591 cri.go:89] found id: ""
	I0229 02:35:22.562422  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:22.562487  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.567997  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:22.568073  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:22.632786  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:22.632811  369591 cri.go:89] found id: ""
	I0229 02:35:22.632822  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:22.632887  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.637899  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:22.637975  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:22.681988  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:22.682014  369591 cri.go:89] found id: ""
	I0229 02:35:22.682024  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:22.682084  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.687515  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:22.687606  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:22.732907  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:22.732931  369591 cri.go:89] found id: ""
	I0229 02:35:22.732939  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:22.732995  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.737695  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:22.737758  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:22.779316  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:22.779341  369591 cri.go:89] found id: ""
	I0229 02:35:22.779349  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:22.779413  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.786533  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:22.786617  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:22.834391  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:22.834420  369591 cri.go:89] found id: ""
	I0229 02:35:22.834430  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:22.834500  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.839386  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:22.839458  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:22.881275  369591 cri.go:89] found id: ""
	I0229 02:35:22.881304  369591 logs.go:276] 0 containers: []
	W0229 02:35:22.881317  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:22.881326  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:22.881404  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:22.932822  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:22.932846  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:22.932850  369591 cri.go:89] found id: ""
	I0229 02:35:22.932858  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:22.932913  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.938541  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.943263  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:22.943288  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:22.994089  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:22.994122  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:23.051780  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:23.051821  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:23.099220  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:23.099251  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:23.157383  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:23.157429  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:23.206125  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:23.206180  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:23.261950  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:23.261982  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:23.324394  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:23.324427  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:23.400608  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:23.400648  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:20.589079  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:23.088469  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.510074  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:24.002388  369869 pod_ready.go:81] duration metric: took 4m0.000212386s waiting for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:24.002420  369869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:35:24.002439  369869 pod_ready.go:38] duration metric: took 4m6.701505951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:24.002490  369869 kubeadm.go:640] restartCluster took 4m24.423602043s
	W0229 02:35:24.002593  369869 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:35:24.002621  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
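	Each pod_ready.go entry above is one tick of a poll on the pod's Ready condition; once 4m0s passes without Ready=True for a system-critical pod, restartCluster gives up and the cluster is wiped with the `kubeadm reset` invocation shown, to be re-bootstrapped from scratch. The condition being polled can be read directly with kubectl (a sketch only; the pod name is copied from the log, and a reachable apiserver/kubeconfig is assumed):

	kubectl -n kube-system get pod metrics-server-57f55c9bc5-86frx \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	The log records only the timeout, not the underlying reason the pod never became Ready.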
	I0229 02:35:21.147180  370051 cri.go:89] found id: ""
	I0229 02:35:21.147211  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.147221  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:21.147228  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:21.147284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:21.187240  370051 cri.go:89] found id: ""
	I0229 02:35:21.187275  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.187287  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:21.187295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:21.187389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:21.228873  370051 cri.go:89] found id: ""
	I0229 02:35:21.228899  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.228917  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:21.228924  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:21.228992  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:21.268827  370051 cri.go:89] found id: ""
	I0229 02:35:21.268856  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.268867  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:21.268876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:21.268970  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:21.313253  370051 cri.go:89] found id: ""
	I0229 02:35:21.313288  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.313297  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:21.313307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:21.313328  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:21.448089  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:21.448120  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:21.448146  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:21.539941  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:21.539983  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:21.590148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:21.590186  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:21.647760  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:21.647797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:24.165842  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:24.183263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:24.183345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:24.233173  370051 cri.go:89] found id: ""
	I0229 02:35:24.233208  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.233219  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:24.233228  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:24.233301  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:24.276937  370051 cri.go:89] found id: ""
	I0229 02:35:24.276977  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.276989  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:24.276998  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:24.277066  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:24.314629  370051 cri.go:89] found id: ""
	I0229 02:35:24.314665  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.314678  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:24.314686  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:24.314753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:24.367585  370051 cri.go:89] found id: ""
	I0229 02:35:24.367618  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.367630  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:24.367639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:24.367709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:24.451128  370051 cri.go:89] found id: ""
	I0229 02:35:24.451151  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.451160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:24.451167  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:24.451258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:24.497302  370051 cri.go:89] found id: ""
	I0229 02:35:24.497336  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.497348  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:24.497357  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:24.497431  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:24.544593  370051 cri.go:89] found id: ""
	I0229 02:35:24.544621  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.544632  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:24.544640  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:24.544714  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:24.584570  370051 cri.go:89] found id: ""
	I0229 02:35:24.584601  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.584613  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:24.584626  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:24.584645  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:24.669019  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:24.669044  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:24.669061  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:24.752163  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:24.752205  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:24.811945  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:24.811985  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:24.874832  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:24.874873  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:23.928222  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:23.928275  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:23.983171  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:23.983216  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:23.999343  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:23.999382  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:24.180422  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:24.180476  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.745283  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:26.768785  369591 api_server.go:72] duration metric: took 4m17.549714658s to wait for apiserver process to appear ...
	I0229 02:35:26.768823  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:35:26.768885  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:26.768949  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:26.816275  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:26.816303  369591 cri.go:89] found id: ""
	I0229 02:35:26.816314  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:26.816379  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.820985  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:26.821062  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:26.870520  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.870545  369591 cri.go:89] found id: ""
	I0229 02:35:26.870555  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:26.870613  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.875785  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:26.875869  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:26.926844  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:26.926884  369591 cri.go:89] found id: ""
	I0229 02:35:26.926895  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:26.926963  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.933667  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:26.933747  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:26.988547  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:26.988575  369591 cri.go:89] found id: ""
	I0229 02:35:26.988584  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:26.988645  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.994520  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:26.994600  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.040568  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.040602  369591 cri.go:89] found id: ""
	I0229 02:35:27.040612  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:27.040679  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.046103  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.046161  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.094322  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.094345  369591 cri.go:89] found id: ""
	I0229 02:35:27.094357  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:27.094428  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.101702  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.101779  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.164549  369591 cri.go:89] found id: ""
	I0229 02:35:27.164584  369591 logs.go:276] 0 containers: []
	W0229 02:35:27.164596  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.164604  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:27.164674  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:27.219403  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:27.219431  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:27.219436  369591 cri.go:89] found id: ""
	I0229 02:35:27.219447  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:27.219510  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.226705  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.233551  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:27.233576  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.281111  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:27.281152  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.333686  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:27.333738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:27.948683  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.948736  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:28.018866  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:28.018917  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:28.164820  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:28.164857  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:28.222926  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:28.222963  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:28.265708  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:28.265738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:28.309311  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.309352  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:28.363295  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:28.363341  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:28.384099  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:28.384146  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:28.451988  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:28.452025  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:28.499748  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:28.499783  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
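	In contrast to the 370051 loop, this cluster (PID 369591, on v1.29.0-rc.2 binaries) does have live control-plane containers, so logs.go pulls each one's output through the CRI instead of relying only on host journals. The per-container step, reconstructed from the commands above (the `head -n1` selection is an illustrative addition; the log resolves the crictl path via `which crictl` first):

	# Pick an apiserver container ID and tail its log through the CRI
	id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	sudo /usr/bin/crictl logs --tail 400 "$id"

	The same pattern repeats above for etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and both storage-provisioner containers.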
	I0229 02:35:25.586753  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.589329  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.392846  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:27.419255  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:27.419339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:27.465294  370051 cri.go:89] found id: ""
	I0229 02:35:27.465325  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.465337  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:27.465345  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:27.465417  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:27.533393  370051 cri.go:89] found id: ""
	I0229 02:35:27.533424  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.533433  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:27.533441  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:27.533510  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:27.587195  370051 cri.go:89] found id: ""
	I0229 02:35:27.587221  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.587232  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:27.587240  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:27.587313  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:27.638597  370051 cri.go:89] found id: ""
	I0229 02:35:27.638624  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.638632  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:27.638639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:27.638709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.687695  370051 cri.go:89] found id: ""
	I0229 02:35:27.687730  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.687742  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:27.687750  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.687825  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.732275  370051 cri.go:89] found id: ""
	I0229 02:35:27.732309  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.732320  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:27.732327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.732389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.783069  370051 cri.go:89] found id: ""
	I0229 02:35:27.783109  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.783122  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.783133  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:27.783224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:27.832385  370051 cri.go:89] found id: ""
	I0229 02:35:27.832416  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.832429  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:27.832443  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.832460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:27.902610  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:27.902658  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:27.919900  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:27.919947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:28.003313  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:28.003337  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:28.003356  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:28.100814  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.100853  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:30.654289  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:30.683056  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:30.683141  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:30.734678  370051 cri.go:89] found id: ""
	I0229 02:35:30.734704  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.734712  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:30.734719  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:30.734771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:30.780792  370051 cri.go:89] found id: ""
	I0229 02:35:30.780821  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.780830  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:30.780837  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:30.780904  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:30.827244  370051 cri.go:89] found id: ""
	I0229 02:35:30.827269  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.827278  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:30.827285  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:30.827336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:30.871305  370051 cri.go:89] found id: ""
	I0229 02:35:30.871333  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.871342  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:30.871348  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:30.871423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:30.910095  370051 cri.go:89] found id: ""
	I0229 02:35:30.910121  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.910130  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:30.910136  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:30.910188  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:30.955234  370051 cri.go:89] found id: ""
	I0229 02:35:30.955261  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.955271  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:30.955278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:30.955345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:30.996555  370051 cri.go:89] found id: ""
	I0229 02:35:30.996589  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.996602  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:30.996611  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:30.996687  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:31.036424  370051 cri.go:89] found id: ""
	I0229 02:35:31.036454  370051 logs.go:276] 0 containers: []
	W0229 02:35:31.036464  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:31.036474  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.036488  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.107928  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.107987  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.125268  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.125303  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.053142  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:35:31.060477  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:35:31.062106  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:35:31.062143  369591 api_server.go:131] duration metric: took 4.2933111s to wait for apiserver health ...
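
The health probe that just succeeded is a plain GET against the apiserver's /healthz endpoint. The same check can be reproduced from the host; the curl form below is an assumption not shown in the log (-k skips TLS verification, and /healthz is normally reachable anonymously via the default system:public-info-viewer binding):

    curl -k https://192.168.72.114:8443/healthz
    # body on success, as logged above: ok
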
	I0229 02:35:31.062154  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:35:31.062189  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:31.062278  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:31.119877  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:31.119905  369591 cri.go:89] found id: ""
	I0229 02:35:31.119915  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:31.119981  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.125569  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:31.125648  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:31.193662  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.193693  369591 cri.go:89] found id: ""
	I0229 02:35:31.193704  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:31.193762  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.199267  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:31.199365  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:31.251832  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.251862  369591 cri.go:89] found id: ""
	I0229 02:35:31.251873  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:31.251935  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.258374  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:31.258477  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:31.309718  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.309745  369591 cri.go:89] found id: ""
	I0229 02:35:31.309753  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:31.309804  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.314949  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:31.315025  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:31.367936  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:31.367960  369591 cri.go:89] found id: ""
	I0229 02:35:31.367970  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:31.368038  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.373072  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:31.373137  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:31.420362  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:31.420390  369591 cri.go:89] found id: ""
	I0229 02:35:31.420402  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:31.420470  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.427151  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:31.427221  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:31.482289  369591 cri.go:89] found id: ""
	I0229 02:35:31.482321  369591 logs.go:276] 0 containers: []
	W0229 02:35:31.482333  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:31.482342  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:31.482405  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:31.526713  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.526738  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.526744  369591 cri.go:89] found id: ""
	I0229 02:35:31.526755  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:31.526807  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.531874  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.536727  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.536758  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.555901  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.555943  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.689587  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:31.689629  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.737625  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:31.737669  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.781015  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:31.781050  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.824727  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:31.824757  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.866867  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.866897  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.920324  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:31.920375  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.962783  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:31.962815  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:32.003525  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:32.003557  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:32.061377  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:32.061417  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:32.454041  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:32.454097  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:32.498969  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:32.499006  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
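
With a healthy cluster, the same routine gathers per-container logs instead of falling back to host sources: each container ID found via crictl ps is fed to crictl logs with a 400-line tail, exactly as in the commands above. For interactive debugging the generic form is:

    # tail the last 400 lines of a container found via 'crictl ps -a --quiet --name=...'
    sudo /usr/bin/crictl logs --tail 400 <container-id>
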
	I0229 02:35:30.086688  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:32.087795  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:34.585435  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:35.060469  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:35:35.060503  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.060509  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.060516  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.060521  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.060525  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.060530  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.060538  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.060543  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.060553  369591 system_pods.go:74] duration metric: took 3.99838967s to wait for pod list to return data ...
	I0229 02:35:35.060563  369591 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:35:35.063638  369591 default_sa.go:45] found service account: "default"
	I0229 02:35:35.063665  369591 default_sa.go:55] duration metric: took 3.094531ms for default service account to be created ...
	I0229 02:35:35.063676  369591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:35:35.071344  369591 system_pods.go:86] 8 kube-system pods found
	I0229 02:35:35.071366  369591 system_pods.go:89] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.071371  369591 system_pods.go:89] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.071375  369591 system_pods.go:89] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.071380  369591 system_pods.go:89] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.071385  369591 system_pods.go:89] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.071389  369591 system_pods.go:89] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.071397  369591 system_pods.go:89] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.071408  369591 system_pods.go:89] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.071420  369591 system_pods.go:126] duration metric: took 7.737446ms to wait for k8s-apps to be running ...
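
Both listings show metrics-server-57f55c9bc5-zghwq stuck in Pending with its container not ready, yet the k8s-apps wait still passes; the check evidently tolerates a not-ready addon pod as long as the core components are Running. To find the underlying reason for the Pending state one would inspect the pod's events (a standard kubectl invocation, not part of this log):

    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-zghwq
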
	I0229 02:35:35.071433  369591 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:35:35.071482  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:35.091472  369591 system_svc.go:56] duration metric: took 20.031453ms WaitForService to wait for kubelet.
	I0229 02:35:35.091504  369591 kubeadm.go:581] duration metric: took 4m25.872454283s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:35:35.091523  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:35:35.095487  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:35:35.095509  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:35:35.095546  369591 node_conditions.go:105] duration metric: took 4.018229ms to run NodePressure ...
	I0229 02:35:35.095567  369591 start.go:228] waiting for startup goroutines ...
	I0229 02:35:35.095580  369591 start.go:233] waiting for cluster config update ...
	I0229 02:35:35.095594  369591 start.go:242] writing updated cluster config ...
	I0229 02:35:35.095888  369591 ssh_runner.go:195] Run: rm -f paused
	I0229 02:35:35.154197  369591 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 02:35:35.156089  369591 out.go:177] * Done! kubectl is now configured to use "no-preload-247751" cluster and "default" namespace by default
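
Once a profile finishes starting, minikube points the current kubeconfig context at it, as the "Done!" line says. A quick verification from the host (standard kubectl commands, not part of this log):

    kubectl config current-context        # expected: no-preload-247751
    kubectl get pods -n kube-system       # the eight pods listed above
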
	W0229 02:35:31.217691  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:31.217717  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:31.217740  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:31.313847  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:31.313883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:33.861648  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:33.876887  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:33.876954  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:33.921545  370051 cri.go:89] found id: ""
	I0229 02:35:33.921577  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.921588  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:33.921597  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:33.921658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:33.972558  370051 cri.go:89] found id: ""
	I0229 02:35:33.972584  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.972592  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:33.972599  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:33.972662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:34.020821  370051 cri.go:89] found id: ""
	I0229 02:35:34.020852  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.020862  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:34.020873  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:34.020937  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:34.064076  370051 cri.go:89] found id: ""
	I0229 02:35:34.064110  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.064121  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:34.064129  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:34.064191  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:34.108523  370051 cri.go:89] found id: ""
	I0229 02:35:34.108557  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.108568  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:34.108576  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:34.108639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:34.149444  370051 cri.go:89] found id: ""
	I0229 02:35:34.149468  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.149478  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:34.149487  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:34.149562  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:34.193780  370051 cri.go:89] found id: ""
	I0229 02:35:34.193805  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.193814  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:34.193820  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:34.193913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:34.237088  370051 cri.go:89] found id: ""
	I0229 02:35:34.237118  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.237127  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:34.237137  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:34.237151  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:34.281055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:34.281091  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:34.333886  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:34.333925  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:34.353163  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:34.353204  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:34.465925  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:34.465951  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:34.465969  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:36.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:39.086456  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:37.049957  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:37.064297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:37.064384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:37.105669  370051 cri.go:89] found id: ""
	I0229 02:35:37.105703  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.105711  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:37.105720  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:37.105790  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:37.143753  370051 cri.go:89] found id: ""
	I0229 02:35:37.143788  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.143799  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:37.143808  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:37.143880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:37.180126  370051 cri.go:89] found id: ""
	I0229 02:35:37.180157  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.180166  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:37.180173  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:37.180227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:37.221135  370051 cri.go:89] found id: ""
	I0229 02:35:37.221173  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.221185  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:37.221193  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:37.221261  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:37.258888  370051 cri.go:89] found id: ""
	I0229 02:35:37.258920  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.258932  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:37.258940  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:37.259005  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:37.300970  370051 cri.go:89] found id: ""
	I0229 02:35:37.300998  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.301010  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:37.301018  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:37.301105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:37.349797  370051 cri.go:89] found id: ""
	I0229 02:35:37.349829  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.349841  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:37.349850  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:37.349916  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:37.408726  370051 cri.go:89] found id: ""
	I0229 02:35:37.408762  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.408773  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:37.408787  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:37.408805  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:37.462030  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:37.462064  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:37.477836  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:37.477868  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:37.553886  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:37.553924  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:37.553941  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:37.644637  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:37.644683  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:40.197937  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:40.212830  370051 kubeadm.go:640] restartCluster took 4m14.648338345s
	W0229 02:35:40.212984  370051 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 02:35:40.213021  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:35:40.673169  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:40.690108  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:40.702424  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:40.713782  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:35:40.713832  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
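
Having declared the restart unrecoverable ("apiserver process never appeared"), minikube falls back to a clean bootstrap: tear the node down with kubeadm reset, then re-run kubeadm init against its generated config, ignoring the preflight checks that would otherwise object to the leftover directories. Condensed from the two commands above (the full --ignore-preflight-errors list is spelled out in the log line just before this note):

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,...
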
	I0229 02:35:40.775345  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:35:40.775527  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:35:40.929045  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:35:40.929185  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:35:40.929310  370051 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:35:41.154311  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:35:41.154449  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:35:41.162905  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:35:41.317651  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:35:41.319260  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:35:41.319358  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:35:41.319458  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:35:41.319564  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:35:41.319675  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:35:41.319772  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:35:41.319857  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:35:41.319963  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:35:41.320066  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:35:41.320166  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:35:41.320289  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:35:41.320357  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:35:41.320439  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:35:41.457291  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:35:41.599703  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:35:41.766344  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:35:41.939397  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:35:41.940740  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:35:41.090698  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:43.585822  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:41.942544  370051 out.go:204]   - Booting up control plane ...
	I0229 02:35:41.942656  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:35:41.946949  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:35:41.949540  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:35:41.950426  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:35:41.953310  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:35:45.586855  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:48.085961  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:50.585602  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:52.587992  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:55.085046  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.086710  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:59.590441  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.264698  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.262039409s)
	I0229 02:35:57.264826  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:57.285615  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:57.297607  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:57.309412  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:35:57.309471  369869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:35:57.540175  369869 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
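
The [WARNING Service-Kubelet] line is kubeadm noticing that the kubelet unit is not enabled for boot. It is non-fatal here, since minikube starts the service itself, but the remedy kubeadm is hinting at would simply be:

    sudo systemctl enable kubelet.service
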
	I0229 02:36:02.086317  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:04.587625  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.714158  369869 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:36:06.714249  369869 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:36:06.714325  369869 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:36:06.714490  369869 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:36:06.714633  369869 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:36:06.714742  369869 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:36:06.716059  369869 out.go:204]   - Generating certificates and keys ...
	I0229 02:36:06.716160  369869 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:36:06.716250  369869 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:36:06.716357  369869 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:36:06.716434  369869 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:36:06.716508  369869 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:36:06.716572  369869 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:36:06.716649  369869 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:36:06.716722  369869 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:36:06.716824  369869 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:36:06.716952  369869 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:36:06.717008  369869 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:36:06.717080  369869 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:36:06.717147  369869 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:36:06.717221  369869 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:36:06.717298  369869 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:36:06.717367  369869 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:36:06.717474  369869 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:36:06.717559  369869 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:36:06.718770  369869 out.go:204]   - Booting up control plane ...
	I0229 02:36:06.718866  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:36:06.718983  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:36:06.719074  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:36:06.719230  369869 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:36:06.719364  369869 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:36:06.719431  369869 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:36:06.719628  369869 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:36:06.719749  369869 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503520 seconds
	I0229 02:36:06.719906  369869 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:36:06.720060  369869 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:36:06.720126  369869 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:36:06.720344  369869 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-071485 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:36:06.720433  369869 kubeadm.go:322] [bootstrap-token] Using token: oueq3v.8ghuyl6sece1tffl
	I0229 02:36:06.721973  369869 out.go:204]   - Configuring RBAC rules ...
	I0229 02:36:06.722107  369869 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:36:06.722252  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:36:06.722444  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:36:06.722643  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:36:06.722793  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:36:06.722937  369869 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:36:06.723081  369869 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:36:06.723119  369869 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:36:06.723188  369869 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:36:06.723198  369869 kubeadm.go:322] 
	I0229 02:36:06.723285  369869 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:36:06.723310  369869 kubeadm.go:322] 
	I0229 02:36:06.723426  369869 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:36:06.723436  369869 kubeadm.go:322] 
	I0229 02:36:06.723467  369869 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:36:06.723556  369869 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:36:06.723637  369869 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:36:06.723646  369869 kubeadm.go:322] 
	I0229 02:36:06.723713  369869 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:36:06.723722  369869 kubeadm.go:322] 
	I0229 02:36:06.723799  369869 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:36:06.723809  369869 kubeadm.go:322] 
	I0229 02:36:06.723869  369869 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:36:06.723979  369869 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:36:06.724073  369869 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:36:06.724083  369869 kubeadm.go:322] 
	I0229 02:36:06.724178  369869 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:36:06.724269  369869 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:36:06.724279  369869 kubeadm.go:322] 
	I0229 02:36:06.724389  369869 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724520  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 \
	I0229 02:36:06.724552  369869 kubeadm.go:322] 	--control-plane 
	I0229 02:36:06.724560  369869 kubeadm.go:322] 
	I0229 02:36:06.724665  369869 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:36:06.724675  369869 kubeadm.go:322] 
	I0229 02:36:06.724767  369869 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724923  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
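
The --discovery-token-ca-cert-hash in both join commands is the SHA-256 digest of the cluster CA's public key, which joining nodes use to authenticate the control plane before trusting it. Per the kubeadm documentation it can be recomputed from the CA certificate; the path below is the stock kubeadm location (this cluster's certificateDir is /var/lib/minikube/certs, per the [certs] lines above):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
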
	I0229 02:36:06.724941  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:36:06.724952  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:36:06.726566  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:36:07.088398  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:09.587442  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.727880  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:36:06.786343  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
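
The 457-byte file written here is minikube's bridge CNI configuration for the "kvm2 driver + crio runtime" combination chosen above. Its exact contents are not in the log; a representative conflist of the same shape (illustrative only, field values assumed) looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
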
	I0229 02:36:06.842349  369869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:36:06.842420  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=default-k8s-diff-port-071485 minikube.k8s.io/updated_at=2024_02_29T02_36_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:06.842428  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.196763  369869 ops.go:34] apiserver oom_adj: -16
	I0229 02:36:07.196958  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.696991  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.197336  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.697155  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.197955  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.697107  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:10.197816  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.085528  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:14.085852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:10.697486  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.197744  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.697179  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.197614  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.697015  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.197983  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.697315  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.196982  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.698012  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.197896  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.697895  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.197062  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.697819  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.197222  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.697031  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.197683  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.697094  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.870924  369869 kubeadm.go:1088] duration metric: took 12.028572011s to wait for elevateKubeSystemPrivileges.
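
The burst of "kubectl get sa default" calls above is minikube polling, at roughly 500ms intervals, until the "default" ServiceAccount exists; that is the signal that kube-controller-manager's service-account controller is up, so the minikube-rbac clusterrolebinding created earlier can take effect. A roughly equivalent shell loop:

    # poll until the default ServiceAccount exists (what the 12s loop above does)
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
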
	I0229 02:36:18.870961  369869 kubeadm.go:406] StartCluster complete in 5m19.353203226s
	I0229 02:36:18.870986  369869 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.871077  369869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:36:18.873654  369869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.873954  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:36:18.874041  369869 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:36:18.874118  369869 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874130  369869 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874142  369869 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874149  369869 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:36:18.874152  369869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071485"
	I0229 02:36:18.874201  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874256  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:36:18.874341  369869 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874359  369869 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874367  369869 addons.go:243] addon metrics-server should already be in state true
	I0229 02:36:18.874422  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874637  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874691  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874811  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874846  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.892207  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0229 02:36:18.892260  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0229 02:36:18.892967  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.892986  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.893508  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893528  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893680  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893700  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893936  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894102  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894143  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0229 02:36:18.894331  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.894582  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.894594  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.894613  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.895109  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.895143  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.895508  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.896106  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.896142  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.898127  369869 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.898143  369869 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:36:18.898168  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.898482  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.898516  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.917303  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0229 02:36:18.917472  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I0229 02:36:18.917747  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.917894  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.918493  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918510  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.918654  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918665  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.919012  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919077  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919229  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.919754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.921030  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.922677  369869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:36:18.921622  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.923872  369869 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:18.923899  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:36:18.923919  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.925237  369869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:36:18.926153  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:36:18.924603  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0229 02:36:18.926269  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:36:18.926303  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.927739  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.928184  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.928277  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.928299  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.930032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.930057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.930386  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.930456  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.930614  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.930723  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.930914  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931014  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.931133  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.931185  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.931533  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.931553  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.931576  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931737  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.932033  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.932190  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.948311  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0229 02:36:18.949328  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.949793  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.949819  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.950313  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.950529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.952381  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.952660  369869 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:18.952673  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:36:18.952689  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.956332  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.956779  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.956808  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.957117  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.957313  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.957425  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.957485  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:19.128114  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:19.141619  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:36:19.141649  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:36:19.169945  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:19.187099  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:36:19.187124  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:36:19.211358  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
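The sed pipeline above rewrites the CoreDNS Corefile in place: it fetches the coredns ConfigMap, splices a hosts block in front of the forward directive so that host.minikube.internal resolves to the host-side gateway 192.168.61.1, inserts a log directive above errors to enable query logging, and replaces the ConfigMap. Unescaped, the injected stanza is:

    hosts {
       192.168.61.1 host.minikube.internal
       fallthrough
    }

The result can be inspected with the same kubectl call seen earlier in this log:

    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml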
	I0229 02:36:19.289856  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:36:19.289880  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:36:19.398720  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:36:19.414512  369869 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-071485" context rescaled to 1 replicas
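The rescale noted above pins CoreDNS to a single replica for this one-node profile. A minimal manual equivalent, assuming the kubeconfig context carries the profile name (which minikube sets up by default), would be:

    # scale the coredns deployment down to one replica (illustrative equivalent of the rescale above)
    kubectl --context default-k8s-diff-port-071485 -n kube-system scale deployment coredns --replicas=1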
	I0229 02:36:19.414562  369869 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:36:19.416389  369869 out.go:177] * Verifying Kubernetes components...
	I0229 02:36:15.586606  369508 pod_ready.go:81] duration metric: took 4m0.008250092s waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:36:15.586638  369508 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:36:15.586648  369508 pod_ready.go:38] duration metric: took 4m5.573018241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:36:15.586669  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:36:15.586707  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:15.586771  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:15.644937  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:15.644969  369508 cri.go:89] found id: ""
	I0229 02:36:15.644980  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:15.645054  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.653058  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:15.653137  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:15.709225  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:15.709254  369508 cri.go:89] found id: ""
	I0229 02:36:15.709264  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:15.709333  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.715304  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:15.715391  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:15.769593  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:15.769627  369508 cri.go:89] found id: ""
	I0229 02:36:15.769637  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:15.769702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.775157  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:15.775230  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:15.820002  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:15.820030  369508 cri.go:89] found id: ""
	I0229 02:36:15.820040  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:15.820105  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.827058  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:15.827122  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:15.875030  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:15.875063  369508 cri.go:89] found id: ""
	I0229 02:36:15.875074  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:15.875142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.880489  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:15.880555  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:15.929452  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:15.929476  369508 cri.go:89] found id: ""
	I0229 02:36:15.929484  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:15.929545  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.934321  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:15.934396  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:15.981960  369508 cri.go:89] found id: ""
	I0229 02:36:15.981997  369508 logs.go:276] 0 containers: []
	W0229 02:36:15.982006  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:15.982014  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:15.982077  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:16.034169  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.034196  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.034201  369508 cri.go:89] found id: ""
	I0229 02:36:16.034210  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:16.034281  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.039463  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.044719  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:16.044748  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:16.111048  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:16.111084  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:16.278784  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:16.278832  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:16.333048  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:16.333085  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:16.376514  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:16.376555  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:16.420840  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:16.420944  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:16.468273  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:16.468308  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:16.526001  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:16.526043  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.569084  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:16.569120  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.609818  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:16.609847  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:16.660979  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:16.661019  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:16.677397  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:16.677432  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:16.732421  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:16.732464  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
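Each "Gathering logs" round above follows the same two-step pattern: discover the container ID for a component with crictl, then tail that container's log. A minimal shell rendition of one iteration, using only flags that appear verbatim in this log, is:

    # discover the kube-apiserver container ID, then tail its last 400 log lines
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    sudo /usr/bin/crictl logs --tail 400 "$id"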
	I0229 02:36:19.417788  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:21.277741  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.107753576s)
	I0229 02:36:21.277802  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277815  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.066425449s)
	I0229 02:36:21.277873  369869 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.149690589s)
	I0229 02:36:21.277908  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277918  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278323  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278331  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278339  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278351  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278445  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278458  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278465  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278474  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278519  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278592  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278603  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280452  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.280470  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280482  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.300880  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.300907  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.301193  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.301217  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.572633  369869 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.154816183s)
	I0229 02:36:21.572676  369869 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.572635  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.173852857s)
	I0229 02:36:21.572814  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.572842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.573207  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573215  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573228  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.573236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573538  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573575  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573587  369869 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-071485"
	I0229 02:36:21.575111  369869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
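With the three addons reported enabled, the per-profile addon state can be double-checked from the host, e.g.:

    # list addon status for the profile via minikube's addons subcommand
    out/minikube-linux-amd64 -p default-k8s-diff-port-071485 addons list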
	I0229 02:36:19.738493  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:36:19.758171  369508 api_server.go:72] duration metric: took 4m17.008228834s to wait for apiserver process to appear ...
	I0229 02:36:19.758199  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:36:19.758281  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:19.758349  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:19.811042  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:19.811071  369508 cri.go:89] found id: ""
	I0229 02:36:19.811082  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:19.811145  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.817952  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:19.818034  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:19.871006  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:19.871033  369508 cri.go:89] found id: ""
	I0229 02:36:19.871043  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:19.871109  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.877440  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:19.877512  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:19.928043  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:19.928071  369508 cri.go:89] found id: ""
	I0229 02:36:19.928081  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:19.928142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.935299  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:19.935363  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:19.977360  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:19.977391  369508 cri.go:89] found id: ""
	I0229 02:36:19.977402  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:19.977482  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.982361  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:19.982442  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:20.025903  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.025931  369508 cri.go:89] found id: ""
	I0229 02:36:20.025941  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:20.026012  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.031390  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:20.031477  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:20.080768  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.080792  369508 cri.go:89] found id: ""
	I0229 02:36:20.080800  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:20.080864  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.087322  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:20.087388  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:20.139067  369508 cri.go:89] found id: ""
	I0229 02:36:20.139111  369508 logs.go:276] 0 containers: []
	W0229 02:36:20.139124  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:20.139132  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:20.139195  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:20.193052  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:20.193085  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:20.193091  369508 cri.go:89] found id: ""
	I0229 02:36:20.193101  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:20.193174  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.199740  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.205385  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:20.205414  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:20.360843  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:20.360894  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:20.411077  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:20.411113  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:20.459855  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:20.459910  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:20.517056  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:20.517101  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.568151  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:20.568185  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.637131  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:20.637165  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.144933  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:21.144980  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:21.206565  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:21.206607  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:21.257071  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:21.257118  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:21.315541  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:21.315589  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:21.358630  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:21.358665  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:21.398170  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:21.398201  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:23.914059  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:36:23.923854  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:36:23.926443  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:36:23.926466  369508 api_server.go:131] duration metric: took 4.168260413s to wait for apiserver health ...
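The healthz probe above can be reproduced by hand against the same endpoint. Assuming anonymous access to /healthz is still granted (the Kubernetes default via the system:public-info-viewer binding), a bare curl returns the same "ok":

    # -k skips TLS verification of the apiserver's self-signed certificate
    curl -k https://192.168.50.218:8443/healthz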
	I0229 02:36:23.926475  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:36:23.926506  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:23.926566  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:24.013825  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:24.013849  369508 cri.go:89] found id: ""
	I0229 02:36:24.013857  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:24.013913  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.019432  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:24.019506  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:24.078857  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.078877  369508 cri.go:89] found id: ""
	I0229 02:36:24.078885  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:24.078945  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.083761  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:24.083822  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:24.133681  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:24.133707  369508 cri.go:89] found id: ""
	I0229 02:36:24.133717  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:24.133779  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.139165  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:24.139228  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:24.185863  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.185883  369508 cri.go:89] found id: ""
	I0229 02:36:24.185892  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:24.185939  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.191094  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:24.191164  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:24.232922  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.232953  369508 cri.go:89] found id: ""
	I0229 02:36:24.232963  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:24.233031  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.238154  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:24.238252  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:24.280735  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:24.280760  369508 cri.go:89] found id: ""
	I0229 02:36:24.280769  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:24.280842  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.285497  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:24.285558  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:24.324979  369508 cri.go:89] found id: ""
	I0229 02:36:24.325007  369508 logs.go:276] 0 containers: []
	W0229 02:36:24.325016  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:24.325022  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:24.325085  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:24.370875  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:24.370908  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:24.370912  369508 cri.go:89] found id: ""
	I0229 02:36:24.370919  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:24.370973  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.378247  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.382856  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:24.382899  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.430889  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:24.430919  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.470370  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:24.470407  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.576300  369869 addons.go:505] enable addons completed in 2.702258052s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 02:36:21.582468  369869 node_ready.go:49] node "default-k8s-diff-port-071485" has status "Ready":"True"
	I0229 02:36:21.582494  369869 node_ready.go:38] duration metric: took 9.804213ms waiting for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.582506  369869 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:36:21.608694  369869 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125662  369869 pod_ready.go:92] pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.125695  369869 pod_ready.go:81] duration metric: took 1.51697387s waiting for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125707  369869 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141831  369869 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.141855  369869 pod_ready.go:81] duration metric: took 16.140002ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141864  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154216  369869 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.154261  369869 pod_ready.go:81] duration metric: took 12.389751ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154276  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166057  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.166085  369869 pod_ready.go:81] duration metric: took 11.798242ms waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166098  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179414  369869 pod_ready.go:92] pod "kube-proxy-gr44w" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.179437  369869 pod_ready.go:81] duration metric: took 13.331411ms waiting for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179447  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576569  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.576597  369869 pod_ready.go:81] duration metric: took 397.142516ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576611  369869 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:21.953781  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:36:21.954431  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:21.954685  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
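kubeadm's kubelet-check is probing the kubelet's local healthz endpoint; the connection-refused error means nothing is listening on 127.0.0.1:10248. The same probe can be run by hand inside the guest to confirm:

    # the exact probe kubeadm reports above; connection refused means the kubelet is not listening
    curl -sSL http://localhost:10248/healthz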
	I0229 02:36:24.880947  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:24.880985  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:24.939045  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:24.939079  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.987109  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:24.987144  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:25.049095  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:25.049131  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:25.091654  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:25.091686  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:25.153281  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:25.153326  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:25.169544  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:25.169575  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:25.294469  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:25.294504  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:25.346867  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:25.346900  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:25.388876  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:25.388921  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:27.937848  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:36:27.937878  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.937883  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.937888  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.937891  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.937894  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.937898  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.937903  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.937908  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.937922  369508 system_pods.go:74] duration metric: took 4.011440564s to wait for pod list to return data ...
	I0229 02:36:27.937933  369508 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:36:27.940602  369508 default_sa.go:45] found service account: "default"
	I0229 02:36:27.940623  369508 default_sa.go:55] duration metric: took 2.681589ms for default service account to be created ...
	I0229 02:36:27.940632  369508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:36:27.947433  369508 system_pods.go:86] 8 kube-system pods found
	I0229 02:36:27.947455  369508 system_pods.go:89] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.947466  369508 system_pods.go:89] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.947472  369508 system_pods.go:89] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.947482  369508 system_pods.go:89] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.947491  369508 system_pods.go:89] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.947497  369508 system_pods.go:89] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.947508  369508 system_pods.go:89] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.947518  369508 system_pods.go:89] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.947531  369508 system_pods.go:126] duration metric: took 6.892538ms to wait for k8s-apps to be running ...
	I0229 02:36:27.947539  369508 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:36:27.947591  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:27.965730  369508 system_svc.go:56] duration metric: took 18.181663ms WaitForService to wait for kubelet.
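The WaitForService check above relies on systemctl's exit status: is-active --quiet prints nothing and exits 0 only when the unit is active, so it slots directly into a shell conditional:

    # exits 0 iff the kubelet unit is active; --quiet suppresses the state string
    sudo systemctl is-active --quiet kubelet && echo kubelet is active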
	I0229 02:36:27.965756  369508 kubeadm.go:581] duration metric: took 4m25.215820473s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:36:27.965780  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:36:27.970094  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:36:27.970123  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:36:27.970138  369508 node_conditions.go:105] duration metric: took 4.347423ms to run NodePressure ...
	I0229 02:36:27.970152  369508 start.go:228] waiting for startup goroutines ...
	I0229 02:36:27.970162  369508 start.go:233] waiting for cluster config update ...
	I0229 02:36:27.970175  369508 start.go:242] writing updated cluster config ...
	I0229 02:36:27.970529  369508 ssh_runner.go:195] Run: rm -f paused
	I0229 02:36:28.020686  369508 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:36:28.022730  369508 out.go:177] * Done! kubectl is now configured to use "embed-certs-915633" cluster and "default" namespace by default
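The closing version note flags a minor skew of 1 between the kubectl client (1.29.2) and the cluster (1.28.4); that pairing is within kubectl's supported skew of one minor version in either direction. The two versions can be confirmed side by side with:

    # prints client and server versions; a one-minor difference is supported
    kubectl version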
	I0229 02:36:25.585985  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:28.085278  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:26.954801  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:26.955093  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:30.583462  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:32.584198  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:34.585129  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:37.085551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:39.584450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:36.955344  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:36.955543  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:41.585000  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:44.083919  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:46.085694  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:48.583474  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:50.584026  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:53.084622  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:55.084729  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:57.084941  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:59.586329  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:56.957911  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:56.958178  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:02.085189  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:04.085672  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:06.586906  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:09.085130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:11.583811  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:13.585179  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:16.083670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:18.084884  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:20.584395  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:22.585487  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:24.586088  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:26.586608  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:29.084644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:31.585292  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:34.083690  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:36.959509  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:37:36.959795  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:36.959812  370051 kubeadm.go:322] 
	I0229 02:37:36.959848  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:37:36.959887  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:37:36.959893  370051 kubeadm.go:322] 
	I0229 02:37:36.959937  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:37:36.959991  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:37:36.960142  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:37:36.960167  370051 kubeadm.go:322] 
	I0229 02:37:36.960282  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:37:36.960318  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:37:36.960362  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:37:36.960371  370051 kubeadm.go:322] 
	I0229 02:37:36.960482  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:37:36.960617  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0229 02:37:36.960756  370051 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0229 02:37:36.960839  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:37:36.960951  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:37:36.961015  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:37:36.961366  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:37:36.961507  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:37:36.961616  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
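The triage kubeadm prints above reduces to two checks on the node: whether the kubelet unit is running, and whether a control-plane container crashed. A sketch of running them through minikube's ssh wrapper (the profile name is a placeholder; this run uses CRI-O, so crictl stands in for the docker example in the message):

	# Why did the kubelet stop (or never start)?
	minikube ssh -p <profile> -- sudo systemctl status kubelet
	minikube ssh -p <profile> -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# List control-plane containers under CRI-O instead of docker:
	minikube ssh -p <profile> -- sudo crictl ps -a | grep kube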
	W0229 02:37:36.961763  370051 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 02:37:36.961835  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:37:37.427665  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:37:37.443045  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:37:37.456937  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
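Exit status 2 from ls is expected at this point: the 'kubeadm reset' just above removed every /etc/kubernetes/*.conf, so the stale-config cleanup is correctly skipped and init is retried from scratch. The same probe by hand, for reference:

	# Mirrors minikube's stale-config check; a non-zero exit here just means a clean slate.
	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	  /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
	  || echo "no stale kubeadm configs to clean up"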
	I0229 02:37:37.456979  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:37:37.529093  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:37:37.529246  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:37:37.670260  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:37:37.670417  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:37:37.670548  370051 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 02:37:37.904220  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:37:37.905569  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:37:37.914919  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:37:38.070911  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:37:38.072738  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:37:38.072860  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:37:38.072951  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:37:38.073049  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:37:38.073132  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:37:38.073230  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:37:38.073299  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:37:38.073376  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:37:38.073458  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:37:38.073566  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:37:38.073680  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:37:38.073720  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:37:38.073794  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:37:38.209805  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:37:38.305550  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:37:38.464715  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:37:38.623139  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:37:38.624364  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:37:36.084556  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.086561  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.625883  370051 out.go:204]   - Booting up control plane ...
	I0229 02:37:38.626039  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:37:38.630668  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:37:38.631740  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:37:38.632687  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:37:38.636043  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:37:40.583589  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:42.583968  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:44.584409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:46.586413  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:49.084223  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:51.584770  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:53.584871  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:55.585299  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:58.084753  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:00.584432  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:03.085511  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:05.585519  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:08.085774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:10.087984  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:12.584744  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:15.085757  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:17.584807  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:19.588130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:18.637746  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:38:18.638616  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:18.638883  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:22.084442  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:24.085227  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:23.639374  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:23.639613  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:26.087774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:28.584872  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:30.587375  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.085060  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:35.086106  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.640169  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:33.640468  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:37.584670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:40.085797  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:42.585365  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:44.587079  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:46.590638  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:49.086500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:51.584286  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.587405  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.640871  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:53.641147  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:56.084551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:58.085668  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:00.086247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:02.588854  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:05.085163  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:07.090885  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:09.583687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:11.585184  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:14.085800  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:16.086643  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:18.584073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:21.084992  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:23.585496  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:25.586111  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:28.086464  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:33.642813  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:39:33.643083  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:39:33.643099  370051 kubeadm.go:322] 
	I0229 02:39:33.643153  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:39:33.643206  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:39:33.643213  370051 kubeadm.go:322] 
	I0229 02:39:33.643252  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:39:33.643296  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:39:33.643443  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:39:33.643455  370051 kubeadm.go:322] 
	I0229 02:39:33.643605  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:39:33.643655  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:39:33.643700  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:39:33.643714  370051 kubeadm.go:322] 
	I0229 02:39:33.643871  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:39:33.644040  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0229 02:39:33.644193  370051 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0229 02:39:33.644272  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:39:33.644371  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:39:33.644412  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:39:33.644855  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:39:33.644972  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:39:33.645065  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:39:33.645132  370051 kubeadm.go:406] StartCluster complete in 8m8.138449101s
	I0229 02:39:33.645178  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:39:33.645255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:39:33.699121  370051 cri.go:89] found id: ""
	I0229 02:39:33.699154  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.699166  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:39:33.699174  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:39:33.699240  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:39:33.747229  370051 cri.go:89] found id: ""
	I0229 02:39:33.747260  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.747272  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:39:33.747279  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:39:33.747349  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:39:33.789303  370051 cri.go:89] found id: ""
	I0229 02:39:33.789334  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.789343  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:39:33.789350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:39:33.789413  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:39:33.832769  370051 cri.go:89] found id: ""
	I0229 02:39:33.832801  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.832814  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:39:33.832824  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:39:33.832891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:39:33.881508  370051 cri.go:89] found id: ""
	I0229 02:39:33.881543  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.881554  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:39:33.881571  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:39:33.881635  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:39:33.941691  370051 cri.go:89] found id: ""
	I0229 02:39:33.941728  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.941740  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:39:33.941749  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:39:33.941822  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:39:33.990639  370051 cri.go:89] found id: ""
	I0229 02:39:33.990681  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.990704  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:39:33.990713  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:39:33.990774  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:39:34.038426  370051 cri.go:89] found id: ""
	I0229 02:39:34.038460  370051 logs.go:276] 0 containers: []
	W0229 02:39:34.038470  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:39:34.038480  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:39:34.038497  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:39:34.054571  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:39:34.054604  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:39:34.131297  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
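The describe-nodes failure is a downstream symptom, not a separate fault: every crictl listing above found zero control-plane containers, so nothing serves localhost:8443 and kubectl's connection is refused. Confirming that ordering on the node would look like this (run via minikube ssh; the first command is the same query the log runs):

	sudo crictl ps -a --name=kube-apiserver          # empty output: the apiserver never started
	curl -sk https://localhost:8443/healthz || true  # connection refused until it does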
	I0229 02:39:34.131323  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:39:34.131337  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:39:34.232302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:39:34.232349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:39:34.283314  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:39:34.283351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:39:34.336858  370051 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:39:34.336920  370051 out.go:239] * 
	W0229 02:39:34.336985  370051 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:39:34.337006  370051 out.go:239] * 
	W0229 02:39:34.337787  370051 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:39:34.340744  370051 out.go:177] 
	W0229 02:39:34.342096  370051 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:39:34.342137  370051 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:39:34.342160  370051 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:39:34.343540  370051 out.go:177] 
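The suggested mitigation is to hand the kubelet a systemd cgroup driver and retry. A hedged sketch of that retry (the profile name is a placeholder; the flag and value are verbatim from the suggestion above, and the Kubernetes version is the one in this run's binaries path):

	minikube start -p <profile> \
	  --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd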
	I0229 02:39:30.584963  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:32.585599  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:34.588073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:37.085513  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:39.584721  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:41.585072  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:44.086996  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:46.587437  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:49.083819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:51.084472  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:53.085522  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:55.585518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:58.084454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:00.085075  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:02.588500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:05.083707  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:07.084423  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:09.584552  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:11.590611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:14.084618  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:16.597479  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:19.086312  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:21.586450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:23.583798  369869 pod_ready.go:81] duration metric: took 4m0.007166298s waiting for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
	E0229 02:40:23.583824  369869 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:40:23.583834  369869 pod_ready.go:38] duration metric: took 4m2.001316522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
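Here the extra readiness gate gives up: metrics-server never reported Ready within the 4m0s budget, so the wait returns a deadline error and the run moves on to log collection. Inspecting the stuck pod directly would show which condition is failing; a sketch, assuming the profile name doubles as the kubeconfig context (minikube's default) and using the pod name from the log:

	kubectl --context default-k8s-diff-port-071485 -n kube-system \
	  get pod metrics-server-57f55c9bc5-fpwzl -o wide
	kubectl --context default-k8s-diff-port-071485 -n kube-system \
	  describe pod metrics-server-57f55c9bc5-fpwzl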
	I0229 02:40:23.583860  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:40:23.583899  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:23.584002  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:23.655958  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:23.655987  369869 cri.go:89] found id: ""
	I0229 02:40:23.655997  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:23.656057  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.661125  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:23.661199  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:23.712373  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:23.712400  369869 cri.go:89] found id: ""
	I0229 02:40:23.712410  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:23.712508  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.718149  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:23.718209  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:23.775835  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:23.775858  369869 cri.go:89] found id: ""
	I0229 02:40:23.775867  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:23.775923  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.780698  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:23.780792  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:23.825914  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:23.825939  369869 cri.go:89] found id: ""
	I0229 02:40:23.825949  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:23.826017  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.830870  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:23.830932  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:23.868737  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:23.868767  369869 cri.go:89] found id: ""
	I0229 02:40:23.868777  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:23.868841  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.873522  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:23.873598  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:23.918640  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:23.918663  369869 cri.go:89] found id: ""
	I0229 02:40:23.918671  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:23.918725  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.923456  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:23.923517  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:23.963045  369869 cri.go:89] found id: ""
	I0229 02:40:23.963071  369869 logs.go:276] 0 containers: []
	W0229 02:40:23.963080  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:23.963085  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:23.963136  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:24.006380  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:24.006402  369869 cri.go:89] found id: ""
	I0229 02:40:24.006409  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:24.006459  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:24.012228  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:24.012269  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:24.095110  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:24.095354  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:24.117199  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:24.117229  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:24.181064  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:24.181126  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:24.239267  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:24.239305  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:24.283248  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:24.283281  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:24.746786  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:24.746831  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:24.764451  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:24.764487  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:24.917582  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:24.917625  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:24.980095  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:24.980142  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:25.028219  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:25.028253  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:25.083840  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:25.083874  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:25.131148  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:25.131179  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:25.179314  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:25.179340  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:25.179415  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:25.179432  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:25.179455  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:25.179471  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:25.179479  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:35.181209  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:40:35.199982  369869 api_server.go:72] duration metric: took 4m15.785374734s to wait for apiserver process to appear ...
	I0229 02:40:35.200012  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:40:35.200052  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:35.200109  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:35.241760  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:35.241786  369869 cri.go:89] found id: ""
	I0229 02:40:35.241795  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:35.241846  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.247188  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:35.247294  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:35.293992  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:35.294022  369869 cri.go:89] found id: ""
	I0229 02:40:35.294033  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:35.294098  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.298900  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:35.298971  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:35.340809  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:35.340835  369869 cri.go:89] found id: ""
	I0229 02:40:35.340843  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:35.340903  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.345913  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:35.345988  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:35.392027  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:35.392061  369869 cri.go:89] found id: ""
	I0229 02:40:35.392072  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:35.392140  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.397043  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:35.397120  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:35.452900  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:35.452931  369869 cri.go:89] found id: ""
	I0229 02:40:35.452942  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:35.453014  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.459221  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:35.459303  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:35.503530  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:35.503555  369869 cri.go:89] found id: ""
	I0229 02:40:35.503563  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:35.503615  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.509021  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:35.509083  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:35.553777  369869 cri.go:89] found id: ""
	I0229 02:40:35.553803  369869 logs.go:276] 0 containers: []
	W0229 02:40:35.553812  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:35.553818  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:35.553868  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:35.605234  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:35.605259  369869 cri.go:89] found id: ""
	I0229 02:40:35.605267  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:35.605333  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.610433  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:35.610465  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:36.030757  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:36.030807  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:36.047193  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:36.047224  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:36.105936  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:36.105983  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:36.169028  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:36.169080  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:36.241640  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:36.241678  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:36.284787  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:36.284822  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:36.333264  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:36.333293  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:36.385436  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:36.385468  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:36.463289  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:36.463491  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:36.485748  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:36.485782  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:36.604181  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:36.604218  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:36.659210  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:36.659247  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:36.704612  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:36.704640  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:36.704695  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:36.704706  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:36.704712  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:36.704719  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:36.704726  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:46.705868  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:40:46.711301  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:40:46.713000  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:40:46.713025  369869 api_server.go:131] duration metric: took 11.513005073s to wait for apiserver health ...
	I0229 02:40:46.713034  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:40:46.713061  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:46.713121  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:46.759486  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:46.759505  369869 cri.go:89] found id: ""
	I0229 02:40:46.759517  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:46.759581  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.764215  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:46.764299  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:46.805016  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:46.805042  369869 cri.go:89] found id: ""
	I0229 02:40:46.805049  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:46.805113  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.810213  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:46.810284  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:46.862825  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:46.862855  369869 cri.go:89] found id: ""
	I0229 02:40:46.862867  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:46.862923  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.867531  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:46.867588  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:46.914211  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:46.914247  369869 cri.go:89] found id: ""
	I0229 02:40:46.914258  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:46.914327  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.918857  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:46.918921  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:46.959981  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:46.960016  369869 cri.go:89] found id: ""
	I0229 02:40:46.960027  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:46.960095  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.964789  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:46.964846  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:47.009289  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:47.009313  369869 cri.go:89] found id: ""
	I0229 02:40:47.009322  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:47.009390  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:47.015339  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:47.015413  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:47.059195  369869 cri.go:89] found id: ""
	I0229 02:40:47.059227  369869 logs.go:276] 0 containers: []
	W0229 02:40:47.059239  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:47.059254  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:47.059306  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:47.103293  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:47.103323  369869 cri.go:89] found id: ""
	I0229 02:40:47.103334  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:47.103401  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:47.108048  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:47.108076  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:47.157407  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:47.157441  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:47.591202  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:47.591261  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:47.644877  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:47.644910  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:47.784217  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:47.784249  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:47.839113  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:47.839144  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:47.885581  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:47.885616  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:47.930971  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:47.931009  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:47.986352  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:47.986437  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:48.067103  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:48.067316  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:48.088373  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:48.088407  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:48.105750  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:48.105781  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:48.154640  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:48.154677  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:48.196009  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:48.196042  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:48.196112  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:48.196128  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:48.196137  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:48.196146  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:48.196155  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:58.203822  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:40:58.203853  369869 system_pods.go:61] "coredns-5dd5756b68-xj4sh" [e2741c05-81b2-4de6-8329-f88912d48160] Running
	I0229 02:40:58.203859  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [88b0e865-c53e-4829-a56a-2a3b6e405df4] Running
	I0229 02:40:58.203866  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [445fa1c9-589b-437d-92ca-0d15ee8228ae] Running
	I0229 02:40:58.203872  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [e3f60cdb-6214-4987-b692-a4921ece3895] Running
	I0229 02:40:58.203877  369869 system_pods.go:61] "kube-proxy-gr44w" [a74b553f-683a-4e1b-ac48-b4553d00b306] Running
	I0229 02:40:58.203881  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [4c1afe85-10be-45e5-8b99-6bd3cf12a828] Running
	I0229 02:40:58.203888  369869 system_pods.go:61] "metrics-server-57f55c9bc5-fpwzl" [5215d27e-4bf2-4331-89f2-24096dc96b90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:40:58.203893  369869 system_pods.go:61] "storage-provisioner" [d7b70f8e-1689-4526-a39f-eb8005cbecd2] Running
	I0229 02:40:58.203902  369869 system_pods.go:74] duration metric: took 11.49086169s to wait for pod list to return data ...
	I0229 02:40:58.203913  369869 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:40:58.207120  369869 default_sa.go:45] found service account: "default"
	I0229 02:40:58.207145  369869 default_sa.go:55] duration metric: took 3.22533ms for default service account to be created ...
	I0229 02:40:58.207154  369869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:40:58.213026  369869 system_pods.go:86] 8 kube-system pods found
	I0229 02:40:58.213056  369869 system_pods.go:89] "coredns-5dd5756b68-xj4sh" [e2741c05-81b2-4de6-8329-f88912d48160] Running
	I0229 02:40:58.213065  369869 system_pods.go:89] "etcd-default-k8s-diff-port-071485" [88b0e865-c53e-4829-a56a-2a3b6e405df4] Running
	I0229 02:40:58.213073  369869 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071485" [445fa1c9-589b-437d-92ca-0d15ee8228ae] Running
	I0229 02:40:58.213081  369869 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071485" [e3f60cdb-6214-4987-b692-a4921ece3895] Running
	I0229 02:40:58.213088  369869 system_pods.go:89] "kube-proxy-gr44w" [a74b553f-683a-4e1b-ac48-b4553d00b306] Running
	I0229 02:40:58.213094  369869 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071485" [4c1afe85-10be-45e5-8b99-6bd3cf12a828] Running
	I0229 02:40:58.213107  369869 system_pods.go:89] "metrics-server-57f55c9bc5-fpwzl" [5215d27e-4bf2-4331-89f2-24096dc96b90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:40:58.213117  369869 system_pods.go:89] "storage-provisioner" [d7b70f8e-1689-4526-a39f-eb8005cbecd2] Running
	I0229 02:40:58.213130  369869 system_pods.go:126] duration metric: took 5.970128ms to wait for k8s-apps to be running ...
	I0229 02:40:58.213142  369869 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:40:58.213204  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:40:58.230150  369869 system_svc.go:56] duration metric: took 16.998299ms WaitForService to wait for kubelet.
	I0229 02:40:58.230178  369869 kubeadm.go:581] duration metric: took 4m38.815578079s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:40:58.230245  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:40:58.233660  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:40:58.233719  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:40:58.233737  369869 node_conditions.go:105] duration metric: took 3.486117ms to run NodePressure ...
	I0229 02:40:58.233756  369869 start.go:228] waiting for startup goroutines ...
	I0229 02:40:58.233766  369869 start.go:233] waiting for cluster config update ...
	I0229 02:40:58.233777  369869 start.go:242] writing updated cluster config ...
	I0229 02:40:58.234079  369869 ssh_runner.go:195] Run: rm -f paused
	I0229 02:40:58.285415  369869 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:40:58.287433  369869 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-071485" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.224401128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174730224376462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3103e91-abaf-4169-a8e3-a0ef5c8c0997 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.225473539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29f726f5-faed-4be5-b2c0-8c7671e7ba04 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.225555564Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29f726f5-faed-4be5-b2c0-8c7671e7ba04 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.225849261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173950902550234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8e99e2123d2e5303af936d009927f675a0330fa1d562d04d91c9671e72447a,PodSandboxId:aa7a19621db15a31f3aa5741180f7d09a6558bcc6010fa4af6e04ceaf75df77c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173931049012093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d069c34-3c34-4c30-8698-681e749d7fa4,},Annotations:map[string]string{io.kubernetes.container.hash: 831f7d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c,PodSandboxId:f2c20e2d5e60bf3c023423ccddc6b75295a3b089f89c8ad85ca9b1902c9d2f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709173927679140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kt28m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf7edc3-f4db-4d5e-ad63-ccbec64dfac4,},Annotations:map[string]string{io.kubernetes.container.hash: afa5b1b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb,PodSandboxId:a6fa5f96ebc2b6bfc6a42ad60ce69b9cf970592fa8affcdf705599b5d48cb1e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709173920277568195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tt7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e8eb713-a0cf-49f3-b93
d-7493a9d763ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6877a072,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173920158436585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051
a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d,PodSandboxId:06b1c1143ab74d6ef4e77750f790d1cd89c4c65439fa46ec5c5af993e444686f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709173916416971661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc78ffe0316f227b9b3d46d2ef42ba84,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121,PodSandboxId:aea9eb46829490f648972ab7e94364c7a87dd955b384c49407b4e4e2173ac9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709173916342842598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3606a01af513b0463e4c406da47efcb1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1457bf1e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa,PodSandboxId:35963b598dc6746414efd7f05f463a13fad12a5d48a4911a670ad20ab49f5dfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709173916378486179,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef27af45952a1a1a128b1cf3b7799f57,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226,PodSandboxId:d508fb8be975e1491a80508cab4e25dd1cbfd71f0385f51d254beada0cdf62c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709173916289625855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b65843d5609ea16863ebad71b39fd309,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: bf6905e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29f726f5-faed-4be5-b2c0-8c7671e7ba04 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.269779946Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c3e0917-9c01-461a-9d10-4508779544b7 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.269857275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c3e0917-9c01-461a-9d10-4508779544b7 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.270932837Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29b77b30-a92d-4135-b56f-7eb6b656ec94 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.271318848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174730271300291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29b77b30-a92d-4135-b56f-7eb6b656ec94 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.272021767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e647aae5-65f5-40cd-b653-fca785b24e11 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.272072271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e647aae5-65f5-40cd-b653-fca785b24e11 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.272276208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173950902550234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8e99e2123d2e5303af936d009927f675a0330fa1d562d04d91c9671e72447a,PodSandboxId:aa7a19621db15a31f3aa5741180f7d09a6558bcc6010fa4af6e04ceaf75df77c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173931049012093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d069c34-3c34-4c30-8698-681e749d7fa4,},Annotations:map[string]string{io.kubernetes.container.hash: 831f7d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c,PodSandboxId:f2c20e2d5e60bf3c023423ccddc6b75295a3b089f89c8ad85ca9b1902c9d2f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709173927679140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kt28m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf7edc3-f4db-4d5e-ad63-ccbec64dfac4,},Annotations:map[string]string{io.kubernetes.container.hash: afa5b1b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb,PodSandboxId:a6fa5f96ebc2b6bfc6a42ad60ce69b9cf970592fa8affcdf705599b5d48cb1e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709173920277568195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tt7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e8eb713-a0cf-49f3-b93
d-7493a9d763ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6877a072,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173920158436585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051
a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d,PodSandboxId:06b1c1143ab74d6ef4e77750f790d1cd89c4c65439fa46ec5c5af993e444686f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709173916416971661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc78ffe0316f227b9b3d46d2ef42ba84,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121,PodSandboxId:aea9eb46829490f648972ab7e94364c7a87dd955b384c49407b4e4e2173ac9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709173916342842598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3606a01af513b0463e4c406da47efcb1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1457bf1e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa,PodSandboxId:35963b598dc6746414efd7f05f463a13fad12a5d48a4911a670ad20ab49f5dfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709173916378486179,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef27af45952a1a1a128b1cf3b7799f57,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226,PodSandboxId:d508fb8be975e1491a80508cab4e25dd1cbfd71f0385f51d254beada0cdf62c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709173916289625855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b65843d5609ea16863ebad71b39fd309,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: bf6905e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e647aae5-65f5-40cd-b653-fca785b24e11 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.317045009Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07b280e1-23f2-4bce-a04c-4630518ac749 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.317117504Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07b280e1-23f2-4bce-a04c-4630518ac749 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.318157420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f05a2a23-7fd5-4c29-83e4-7391f01ae922 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.318547599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174730318520452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f05a2a23-7fd5-4c29-83e4-7391f01ae922 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.319235840Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e9ec0b6-f3d0-454e-a903-96139880f891 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.319290270Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e9ec0b6-f3d0-454e-a903-96139880f891 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.319490056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173950902550234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8e99e2123d2e5303af936d009927f675a0330fa1d562d04d91c9671e72447a,PodSandboxId:aa7a19621db15a31f3aa5741180f7d09a6558bcc6010fa4af6e04ceaf75df77c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173931049012093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d069c34-3c34-4c30-8698-681e749d7fa4,},Annotations:map[string]string{io.kubernetes.container.hash: 831f7d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c,PodSandboxId:f2c20e2d5e60bf3c023423ccddc6b75295a3b089f89c8ad85ca9b1902c9d2f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709173927679140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kt28m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf7edc3-f4db-4d5e-ad63-ccbec64dfac4,},Annotations:map[string]string{io.kubernetes.container.hash: afa5b1b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb,PodSandboxId:a6fa5f96ebc2b6bfc6a42ad60ce69b9cf970592fa8affcdf705599b5d48cb1e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709173920277568195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tt7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e8eb713-a0cf-49f3-b93
d-7493a9d763ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6877a072,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173920158436585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051
a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d,PodSandboxId:06b1c1143ab74d6ef4e77750f790d1cd89c4c65439fa46ec5c5af993e444686f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709173916416971661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc78ffe0316f227b9b3d46d2ef42ba84,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121,PodSandboxId:aea9eb46829490f648972ab7e94364c7a87dd955b384c49407b4e4e2173ac9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709173916342842598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3606a01af513b0463e4c406da47efcb1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1457bf1e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa,PodSandboxId:35963b598dc6746414efd7f05f463a13fad12a5d48a4911a670ad20ab49f5dfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709173916378486179,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef27af45952a1a1a128b1cf3b7799f57,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226,PodSandboxId:d508fb8be975e1491a80508cab4e25dd1cbfd71f0385f51d254beada0cdf62c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709173916289625855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b65843d5609ea16863ebad71b39fd309,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: bf6905e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e9ec0b6-f3d0-454e-a903-96139880f891 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.358083290Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c4c0db8-78d9-49fc-b308-4b0664d91bc6 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.358181724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c4c0db8-78d9-49fc-b308-4b0664d91bc6 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.360425108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10fdfd8f-23ed-4a14-a764-ae6d8aeb8a73 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.361219856Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174730361038839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10fdfd8f-23ed-4a14-a764-ae6d8aeb8a73 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.362343816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21747e32-18b8-4b5c-b759-0dd58d5d493d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.362399859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21747e32-18b8-4b5c-b759-0dd58d5d493d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:45:30 embed-certs-915633 crio[680]: time="2024-02-29 02:45:30.362623534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173950902550234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8e99e2123d2e5303af936d009927f675a0330fa1d562d04d91c9671e72447a,PodSandboxId:aa7a19621db15a31f3aa5741180f7d09a6558bcc6010fa4af6e04ceaf75df77c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173931049012093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d069c34-3c34-4c30-8698-681e749d7fa4,},Annotations:map[string]string{io.kubernetes.container.hash: 831f7d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c,PodSandboxId:f2c20e2d5e60bf3c023423ccddc6b75295a3b089f89c8ad85ca9b1902c9d2f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709173927679140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kt28m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf7edc3-f4db-4d5e-ad63-ccbec64dfac4,},Annotations:map[string]string{io.kubernetes.container.hash: afa5b1b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb,PodSandboxId:a6fa5f96ebc2b6bfc6a42ad60ce69b9cf970592fa8affcdf705599b5d48cb1e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709173920277568195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tt7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e8eb713-a0cf-49f3-b93
d-7493a9d763ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6877a072,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173920158436585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051
a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d,PodSandboxId:06b1c1143ab74d6ef4e77750f790d1cd89c4c65439fa46ec5c5af993e444686f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709173916416971661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc78ffe0316f227b9b3d46d2ef42ba84,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121,PodSandboxId:aea9eb46829490f648972ab7e94364c7a87dd955b384c49407b4e4e2173ac9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709173916342842598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3606a01af513b0463e4c406da47efcb1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1457bf1e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa,PodSandboxId:35963b598dc6746414efd7f05f463a13fad12a5d48a4911a670ad20ab49f5dfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709173916378486179,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef27af45952a1a1a128b1cf3b7799f57,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226,PodSandboxId:d508fb8be975e1491a80508cab4e25dd1cbfd71f0385f51d254beada0cdf62c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709173916289625855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b65843d5609ea16863ebad71b39fd309,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: bf6905e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21747e32-18b8-4b5c-b759-0dd58d5d493d name=/runtime.v1.RuntimeService/ListContainers
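	The crio entries above are the kubelet's periodic status polling of CRI-O over its gRPC socket: each cycle is one Version, one ImageFsInfo and one ListContainers call, logged from otel-collector/interceptors.go. The same endpoint can be queried by hand from inside the node; a minimal sketch, assuming the default socket path shown in the node's cri-socket annotation below:

	    # inside the VM, e.g. via: out/minikube-linux-amd64 -p embed-certs-915633 ssh
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # mirrors ListContainers
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # mirrors ImageFsInfo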
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5d03e33e30323       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   326c6ad728613       storage-provisioner
	6d8e99e2123d2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   aa7a19621db15       busybox
	6f79a4150c635       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   f2c20e2d5e60b       coredns-5dd5756b68-kt28m
	8f95a3a0ad6f6       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   a6fa5f96ebc2b       kube-proxy-6tt7l
	4d79154ed71a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   326c6ad728613       storage-provisioner
	57de9d45eaff6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   06b1c1143ab74       kube-scheduler-embed-certs-915633
	8fcb33bb23e69       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   35963b598dc67       kube-controller-manager-embed-certs-915633
	208354e254f6c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   aea9eb4682949       etcd-embed-certs-915633
	74bd751559a70       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   d508fb8be975e       kube-apiserver-embed-certs-915633
	
	
	==> coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60229 - 57940 "HINFO IN 7530651228205597472.1671966392532887046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014516991s
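	The single HINFO query for a long random name appears to be CoreDNS's loop-detection self-probe; the NXDOMAIN answer means no forwarding loop was found. The live log can be re-checked from outside the node with, for example (pod name taken from the container list above):

	    kubectl --context embed-certs-915633 -n kube-system logs coredns-5dd5756b68-kt28m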
	
	
	==> describe nodes <==
	Name:               embed-certs-915633
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-915633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=embed-certs-915633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_22_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:22:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-915633
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:45:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:42:41 +0000   Thu, 29 Feb 2024 02:22:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:42:41 +0000   Thu, 29 Feb 2024 02:22:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:42:41 +0000   Thu, 29 Feb 2024 02:22:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:42:41 +0000   Thu, 29 Feb 2024 02:32:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.218
	  Hostname:    embed-certs-915633
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 275405a572ea4acf891f83ae3176f9fd
	  System UUID:                275405a5-72ea-4acf-891f-83ae3176f9fd
	  Boot ID:                    b5f53730-80e9-46cc-8959-1f6a4a8b85e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5dd5756b68-kt28m                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-embed-certs-915633                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-embed-certs-915633             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-embed-certs-915633    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-6tt7l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-embed-certs-915633             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-57f55c9bc5-6p7f7               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node embed-certs-915633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node embed-certs-915633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node embed-certs-915633 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     23m                kubelet          Node embed-certs-915633 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node embed-certs-915633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node embed-certs-915633 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeReady                23m                kubelet          Node embed-certs-915633 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node embed-certs-915633 event: Registered Node embed-certs-915633 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-915633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-915633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-915633 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-915633 event: Registered Node embed-certs-915633 in Controller
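	The doubled event history (entries at ~23m and again at ~13m) is consistent with the test flow: the node booted once when the cluster was created and once more after the stop/start under test, with RegisteredNode fired by the node-controller after each kubelet start. The same view can be reproduced against this profile with, for example:

	    kubectl --context embed-certs-915633 describe node embed-certs-915633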
	
	
	==> dmesg <==
	[Feb29 02:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.065405] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046527] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.122199] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.454328] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.766420] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.892754] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.063492] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.090998] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.197923] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.149536] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.255637] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[ +17.538360] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.066808] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.044177] kauditd_printk_skb: 84 callbacks suppressed
	[Feb29 02:32] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.425526] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] <==
	{"level":"info","ts":"2024-02-29T02:31:56.951049Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"db562ccfd877cf13","local-member-id":"d4bfeef2bb38c2b5","added-peer-id":"d4bfeef2bb38c2b5","added-peer-peer-urls":["https://192.168.50.218:2380"]}
	{"level":"info","ts":"2024-02-29T02:31:56.951145Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db562ccfd877cf13","local-member-id":"d4bfeef2bb38c2b5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:31:56.95117Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:31:56.961594Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T02:31:56.961844Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d4bfeef2bb38c2b5","initial-advertise-peer-urls":["https://192.168.50.218:2380"],"listen-peer-urls":["https://192.168.50.218:2380"],"advertise-client-urls":["https://192.168.50.218:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.218:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:31:56.961897Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:31:56.96487Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.218:2380"}
	{"level":"info","ts":"2024-02-29T02:31:56.965044Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.218:2380"}
	{"level":"info","ts":"2024-02-29T02:31:57.907167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:57.907265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:57.907298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 received MsgPreVoteResp from d4bfeef2bb38c2b5 at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:57.907327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:57.907351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 received MsgVoteResp from d4bfeef2bb38c2b5 at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:57.907378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:57.907403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4bfeef2bb38c2b5 elected leader d4bfeef2bb38c2b5 at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:57.910974Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d4bfeef2bb38c2b5","local-member-attributes":"{Name:embed-certs-915633 ClientURLs:[https://192.168.50.218:2379]}","request-path":"/0/members/d4bfeef2bb38c2b5/attributes","cluster-id":"db562ccfd877cf13","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:31:57.911167Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:31:57.911778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:31:57.912239Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.218:2379"}
	{"level":"info","ts":"2024-02-29T02:31:57.912649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:31:57.91276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:31:57.912812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:41:57.948612Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":864}
	{"level":"info","ts":"2024-02-29T02:41:57.951571Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":864,"took":"2.521412ms","hash":3716348948}
	{"level":"info","ts":"2024-02-29T02:41:57.951654Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3716348948,"revision":864,"compact-revision":-1}
	
	
	==> kernel <==
	 02:45:30 up 14 min,  0 users,  load average: 0.11, 0.19, 0.17
	Linux embed-certs-915633 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] <==
	I0229 02:41:59.634959       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:42:00.635448       1 handler_proxy.go:93] no RequestInfo found in the context
	W0229 02:42:00.635473       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:42:00.635641       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:42:00.635726       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0229 02:42:00.635791       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:42:00.637069       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 02:42:59.484375       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:43:00.636489       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:43:00.636569       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:43:00.636642       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:43:00.637934       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:43:00.638074       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:43:00.638119       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 02:43:59.484453       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 02:44:59.484984       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:45:00.637428       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:45:00.637541       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:45:00.637571       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:45:00.638743       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:45:00.638846       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:45:00.638878       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
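	The repeating pattern - a 503 while downloading the OpenAPI spec for v1beta1.metrics.k8s.io, then a rate-limited requeue - means the aggregation layer cannot reach the metrics-server backing that APIService; the kubelet log below shows why the pod never starts. The failing aggregation can be confirmed with, for example (assuming the addon's usual k8s-app=metrics-server label):

	    kubectl --context embed-certs-915633 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-915633 -n kube-system get pods -l k8s-app=metrics-server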
	
	
	==> kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] <==
	I0229 02:39:43.280085       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:40:12.678284       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:40:13.288962       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:40:42.682782       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:40:43.297606       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:41:12.688848       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:41:13.306191       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:41:42.695224       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:41:43.317346       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:42:12.704162       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:42:13.328525       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:42:42.709511       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:42:43.338302       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 02:43:11.679587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="336.018µs"
	E0229 02:43:12.717516       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:43:13.347087       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 02:43:22.670741       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="185.461µs"
	E0229 02:43:42.723779       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:43:43.355300       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:44:12.741652       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:44:13.368033       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:44:42.748137       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:44:43.376105       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:45:12.754859       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:45:13.384200       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
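	Both recurring messages share the root cause seen in the apiserver log above: the resource-quota controller and the garbage collector refresh API discovery periodically, and discovery keeps failing on the unavailable metrics.k8s.io/v1beta1 group. The stale group should be visible from any client, e.g.:

	    kubectl --context embed-certs-915633 api-resources
	    # expected to report: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1 ...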
	
	
	==> kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] <==
	I0229 02:32:00.463615       1 server_others.go:69] "Using iptables proxy"
	I0229 02:32:00.474343       1 node.go:141] Successfully retrieved node IP: 192.168.50.218
	I0229 02:32:00.602762       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:32:00.602835       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:32:00.605346       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:32:00.605424       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:32:00.605614       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:32:00.606327       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:32:00.607976       1 config.go:188] "Starting service config controller"
	I0229 02:32:00.608058       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:32:00.608620       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:32:00.608863       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:32:00.609470       1 config.go:315] "Starting node config controller"
	I0229 02:32:00.609600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:32:00.710421       1 shared_informer.go:318] Caches are synced for service config
	I0229 02:32:00.713597       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:32:00.713783       1 shared_informer.go:318] Caches are synced for node config
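	kube-proxy came up in iptables mode, single-stack IPv4 (the IPv6 family is skipped for lack of ip6tables support, which also explains the kubelet's ip6tables canary errors below). The service rules it programs can be inspected from inside the VM, for example:

	    sudo iptables -t nat -L KUBE-SERVICES -n | head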
	
	
	==> kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] <==
	I0229 02:31:57.557919       1 serving.go:348] Generated self-signed cert in-memory
	W0229 02:31:59.578096       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 02:31:59.578186       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 02:31:59.578214       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 02:31:59.578238       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 02:31:59.636838       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 02:31:59.636999       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:31:59.641452       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:31:59.643792       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:31:59.643911       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:31:59.643963       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:31:59.745045       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:42:57 embed-certs-915633 kubelet[890]: E0229 02:42:57.689850     890 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 29 02:42:57 embed-certs-915633 kubelet[890]: E0229 02:42:57.690100     890 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nsbpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-6p7f7_kube-system(b1dc8143-2d47-4cea-b4a1-61808350d2d6): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 29 02:42:57 embed-certs-915633 kubelet[890]: E0229 02:42:57.690164     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:43:11 embed-certs-915633 kubelet[890]: E0229 02:43:11.654366     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:43:22 embed-certs-915633 kubelet[890]: E0229 02:43:22.653566     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:43:37 embed-certs-915633 kubelet[890]: E0229 02:43:37.653579     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:43:48 embed-certs-915633 kubelet[890]: E0229 02:43:48.652936     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:43:55 embed-certs-915633 kubelet[890]: E0229 02:43:55.677573     890 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:43:55 embed-certs-915633 kubelet[890]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:43:55 embed-certs-915633 kubelet[890]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:43:55 embed-certs-915633 kubelet[890]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:43:55 embed-certs-915633 kubelet[890]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:44:01 embed-certs-915633 kubelet[890]: E0229 02:44:01.661085     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:44:12 embed-certs-915633 kubelet[890]: E0229 02:44:12.654243     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:44:23 embed-certs-915633 kubelet[890]: E0229 02:44:23.653601     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:44:38 embed-certs-915633 kubelet[890]: E0229 02:44:38.654807     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:44:49 embed-certs-915633 kubelet[890]: E0229 02:44:49.652841     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:44:55 embed-certs-915633 kubelet[890]: E0229 02:44:55.676851     890 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:44:55 embed-certs-915633 kubelet[890]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:44:55 embed-certs-915633 kubelet[890]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:44:55 embed-certs-915633 kubelet[890]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:44:55 embed-certs-915633 kubelet[890]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:45:01 embed-certs-915633 kubelet[890]: E0229 02:45:01.655255     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:45:12 embed-certs-915633 kubelet[890]: E0229 02:45:12.653514     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:45:24 embed-certs-915633 kubelet[890]: E0229 02:45:24.653598     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	
	
	==> storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] <==
	I0229 02:32:00.318485       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0229 02:32:30.326383       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] <==
	I0229 02:32:31.006877       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:32:31.028038       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:32:31.028317       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 02:32:48.433885       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 02:32:48.434575       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b8069b2-6063-4200-a8cc-5f7225a45a09", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-915633_1901a488-ed44-4337-b3ac-01ae11fb0d43 became leader
	I0229 02:32:48.434917       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-915633_1901a488-ed44-4337-b3ac-01ae11fb0d43!
	I0229 02:32:48.535534       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-915633_1901a488-ed44-4337-b3ac-01ae11fb0d43!
	

                                                
                                                
-- /stdout --
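The capture above repeats two failure modes: metrics-server backing off pulling fake.domain/registry.k8s.io/echoserver:1.4 (a registry host that does not resolve, so the pod can never become Running), and the kubelet's periodic iptables canary failing because the guest kernel has no ip6tables nat table. The first storage-provisioner instance also died timing out against the in-cluster apiserver at 10.96.0.1:443 before its replacement acquired the leader lease. A minimal diagnostic sketch, not part of the test suite (context name taken from the profile above), to list containers stuck in a waiting state and why:

	# Show each kube-system pod and the waiting reason of its containers,
	# e.g. ImagePullBackOff for metrics-server here.
	kubectl --context embed-certs-915633 get pods -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}'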
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-915633 -n embed-certs-915633
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-915633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6p7f7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-915633 describe pod metrics-server-57f55c9bc5-6p7f7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-915633 describe pod metrics-server-57f55c9bc5-6p7f7: exit status 1 (65.603824ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6p7f7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-915633 describe pod metrics-server-57f55c9bc5-6p7f7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.55s)
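The next failure below follows the same pattern, but here every poll against the old-k8s-version apiserver at 192.168.39.160:8443 is refused outright, so the 9m0s wait for the kubernetes-dashboard pods can never succeed. One iteration of that poll can be reproduced by hand; a sketch, with <profile> as a placeholder for the old-k8s-version profile's kubeconfig context (the profile name is not shown in this excerpt):

	# Same namespace and label selector the helper polls below; with the
	# apiserver down this fails with the identical "connection refused".
	kubectl --context <profile> get pods -n kubernetes-dashboard \
	  -l k8s-app=kubernetes-dashboard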

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
[previous warning repeated verbatim 43 more times]
E0229 02:40:21.895134  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
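The interleaved E0229 cert_rotation.go:168 lines appear to come from client-go's certificate reloader inside the test binary: it is still watching client certificates for profiles that earlier tests in this run already tore down (enable-default-cni-117441, flannel-117441, and so on), hence "no such file or directory". They are noise relative to this failure. A quick check, using the path exactly as it appears in the log:

	# The watched profile directories were deleted by earlier tests; listing
	# the profiles dir confirms the missing client.crt paths (a check, not a fix).
	ls /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/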
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
[previous warning repeated verbatim 5 more times]
E0229 02:40:27.859241  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
[previous warning repeated verbatim 13 more times]
E0229 02:40:41.729332  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
[previous warning repeated verbatim 18 more times]
E0229 02:41:00.868373  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
[previous warning repeated verbatim 35 more times]
E0229 02:41:36.515449  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
[previous warning repeated verbatim 7 more times]
E0229 02:41:44.941982  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
[previous warning repeated verbatim 5 more times]
E0229 02:41:50.904239  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
[previous warning repeated verbatim 13 more times]
E0229 02:42:04.062044  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
    (last message repeated 54 more times)
E0229 02:42:59.560140  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
    (last message repeated 10 more times)
E0229 02:43:10.388521  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
    (last message repeated 58 more times)
E0229 02:44:09.040193  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
    (last message repeated 8 more times)
E0229 02:44:18.684020  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
[identical WARNING repeated 10 more times]
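The repeated WARNING comes from a poll loop in helpers_test.go that lists pods by label selector until they appear or the test deadline passes; each failed List call emits one line, so the message recurs for as long as the API server at 192.168.39.160:8443 refuses connections. As a rough illustration only (this is not the actual minikube helper; the function name, retry interval, and kubeconfig path are assumptions), such a loop could look like:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls for pods matching selector in ns until one exists
// or timeout elapses, logging a WARNING on every failed attempt.
func waitForPods(ns, selector string, timeout time.Duration) error {
	// Load the kubeconfig from the default location (an assumption;
	// the test harness points at the profile under test instead).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// Mirrors the WARNING above: while the node is stopped,
			// every List against <ip>:8443 fails with connection refused.
			log.Printf("WARNING: pod list for %q %q returned: %v", ns, selector, err)
			time.Sleep(3 * time.Second)
			continue
		}
		if len(pods.Items) > 0 {
			return nil // at least one matching pod exists
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for pods matching %q in %q", selector, ns)
}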
E0229 02:44:37.822773  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:44:37.824891  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
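The interleaved cert_rotation.go errors are unrelated to this test: client-go's certificate rotation keeps trying to reload client certificates for minikube profiles (custom-flannel-117441, addons-600097, and others below) whose files have already been deleted, so each reload fails with a plain file-open error. A minimal sketch reproducing that failure mode (the path here is hypothetical):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Hypothetical paths; the real log references deleted minikube profiles.
	certFile := "/tmp/missing-profile/client.crt"
	keyFile := "/tmp/missing-profile/client.key"

	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		// Prints: key failed with : open /tmp/missing-profile/client.crt: no such file or directory
		fmt.Println("key failed with :", err)
	}
}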
[the same WARNING repeated 44 times]
E0229 02:45:21.894632  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
[the same WARNING repeated 6 times]
E0229 02:45:27.858629  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
[the same WARNING repeated 69 times]
E0229 02:46:36.515115  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
[the same WARNING repeated 28 times]
E0229 02:47:04.062379  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
E0229 02:47:40.882189  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
E0229 02:48:10.388065  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275488 -n old-k8s-version-275488
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 2 (263.093734ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-275488" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
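For context, the warnings above come from a helper that polls the apiserver, listing dashboard pods by label selector until the deadline expires. The following is a minimal Go sketch of that kind of loop using standard client-go calls; the kubeconfig path, retry interval, and error handling are illustrative assumptions, not the actual helpers_test.go code.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; the test harness uses its own profile paths.
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(
			context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// With the apiserver down, this is the repeated
			// "connection refused" warning seen in the log.
			fmt.Println("WARNING: pod list returned:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("found", len(pods.Items), "pods")
		return
	}
	fmt.Println("pod failed to start within 9m0s: context deadline exceeded")
}

With the apiserver refusing connections for the full window, every iteration errors and the loop can only end at the deadline, which matches the failure recorded above.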
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 2 (262.409257ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
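The two status probes above pass Go templates ({{.APIServer}} and {{.Host}}) to minikube status, which renders them against a status value. A minimal sketch of that rendering, assuming a struct with just the two fields exercised here (the real struct has more):

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for minikube's status value; only the two fields
// the probes above read are modeled (an assumption for illustration).
type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Stopped"}
	for _, f := range []string{"{{.APIServer}}", "{{.Host}}"} {
		t := template.Must(template.New("status").Parse(f))
		if err := t.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		os.Stdout.WriteString("\n") // prints "Stopped", then "Running"
	}
}

This is why {{.APIServer}} can report Stopped while {{.Host}} reports Running: the VM is up, but the apiserver inside it is not answering.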
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-275488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-275488 logs -n 25: (1.684885967s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-117441 sudo cat                              | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo find                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo crio                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-117441                                       | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-542968 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | disable-driver-mounts-542968                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:23 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-915633            | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247751             | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071485  | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275488        | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-915633                 | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247751                  | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071485       | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:40 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275488             | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:26:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:26:36.132854  370051 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:26:36.133389  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133407  370051 out.go:304] Setting ErrFile to fd 2...
	I0229 02:26:36.133414  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133912  370051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:26:36.134959  370051 out.go:298] Setting JSON to false
	I0229 02:26:36.135907  370051 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7739,"bootTime":1709165857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:26:36.135982  370051 start.go:139] virtualization: kvm guest
	I0229 02:26:36.137916  370051 out.go:177] * [old-k8s-version-275488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:26:36.139510  370051 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:26:36.139543  370051 notify.go:220] Checking for updates...
	I0229 02:26:36.141206  370051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:26:36.142776  370051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:26:36.143982  370051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:26:36.145097  370051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:26:36.146170  370051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:26:36.147751  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:26:36.148198  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.148298  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.163969  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0229 02:26:36.164373  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.164977  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.165003  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.165394  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.165584  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.167312  370051 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 02:26:36.168337  370051 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:26:36.168641  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.168683  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.184089  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0229 02:26:36.184605  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.185181  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.185210  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.185551  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.185723  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.222261  370051 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:26:36.223363  370051 start.go:299] selected driver: kvm2
	I0229 02:26:36.223374  370051 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.223487  370051 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:26:36.224130  370051 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.224195  370051 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:26:36.239302  370051 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:26:36.239664  370051 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:26:36.239741  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:26:36.239755  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:26:36.239765  370051 start_flags.go:323] config:
	{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:d
efault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
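	[annotation] The block above is minikube echoing the cluster's profile config before persisting it to profiles/old-k8s-version-275488/config.json (the save is logged a few lines below). A minimal Go sketch of round-tripping a hand-picked subset of those fields; the struct here is illustrative, not minikube's real type:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Illustrative subset of the fields logged above; names mirror the log,
    // but this struct is hypothetical, not minikube's ClusterConfig.
    type Node struct {
    	IP                string
    	Port              int
    	KubernetesVersion string
    	ControlPlane      bool
    	Worker            bool
    }

    type ClusterConfig struct {
    	Name              string
    	Memory            int
    	CPUs              int
    	KubernetesVersion string
    	ContainerRuntime  string
    	Nodes             []Node
    }

    func main() {
    	cfg := ClusterConfig{
    		Name:              "old-k8s-version-275488",
    		Memory:            2200,
    		CPUs:              2,
    		KubernetesVersion: "v1.16.0",
    		ContainerRuntime:  "crio",
    		Nodes: []Node{{IP: "192.168.39.160", Port: 8443, KubernetesVersion: "v1.16.0", ControlPlane: true, Worker: true}},
    	}
    	out, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(out)) // roughly what lands in profiles/<name>/config.json
    }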
	I0229 02:26:36.239908  370051 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.241466  370051 out.go:177] * Starting control plane node old-k8s-version-275488 in cluster old-k8s-version-275488
	I0229 02:26:35.666509  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:38.738602  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:36.242536  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:26:36.242564  370051 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 02:26:36.242573  370051 cache.go:56] Caching tarball of preloaded images
	I0229 02:26:36.242641  370051 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:26:36.242651  370051 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 02:26:36.242742  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:26:36.242905  370051 start.go:365] acquiring machines lock for old-k8s-version-275488: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:26:44.818494  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:47.890482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:53.970508  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:57.042448  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:03.122506  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:06.194415  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:12.274520  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:15.346558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:21.426515  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:24.498557  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:30.578502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:33.650482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:39.730548  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:42.802507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:48.882487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:51.954507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:58.034498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:01.106530  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:07.186513  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:10.258485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:16.338519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:19.410521  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:25.490436  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:28.562555  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:34.642534  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:37.714514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:43.794519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:46.866487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:52.946514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:56.018488  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:02.098512  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:05.170472  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:11.250485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:14.322454  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:20.402450  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:23.474533  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:29.554541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:32.626489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:38.706558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:41.778502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:47.858493  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:50.930489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:57.010541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:00.082537  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:06.162498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:09.234502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
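	[annotation] The long run of "no route to host" lines above is process 369508 probing TCP port 22 on 192.168.50.218 for roughly four and a half minutes while the embed-certs VM is down; note the steady ~3s/6s cadence of the failed dials. A minimal sketch of that kind of reachability probe (an assumed shape, not libmachine's actual code):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "192.168.50.218:22"
    	deadline := time.Now().Add(5 * time.Minute)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    		if err != nil {
    			fmt.Printf("Error dialing TCP: %v\n", err)
    			time.Sleep(3 * time.Second)
    			continue
    		}
    		conn.Close()
    		fmt.Println("ssh port reachable")
    		return
    	}
    	fmt.Println("gave up: host never became reachable")
    }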
	I0229 02:30:12.238620  369591 start.go:369] acquired machines lock for "no-preload-247751" in 4m33.303501223s
	I0229 02:30:12.238705  369591 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:12.238716  369591 fix.go:54] fixHost starting: 
	I0229 02:30:12.239171  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:12.239240  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:12.254984  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0229 02:30:12.255490  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:12.255991  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:30:12.256012  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:12.256463  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:12.256668  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:12.256840  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:30:12.258341  369591 fix.go:102] recreateIfNeeded on no-preload-247751: state=Stopped err=<nil>
	I0229 02:30:12.258371  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	W0229 02:30:12.258522  369591 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:12.260176  369591 out.go:177] * Restarting existing kvm2 VM for "no-preload-247751" ...
	I0229 02:30:12.261521  369591 main.go:141] libmachine: (no-preload-247751) Calling .Start
	I0229 02:30:12.261678  369591 main.go:141] libmachine: (no-preload-247751) Ensuring networks are active...
	I0229 02:30:12.262375  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network default is active
	I0229 02:30:12.262642  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network mk-no-preload-247751 is active
	I0229 02:30:12.262962  369591 main.go:141] libmachine: (no-preload-247751) Getting domain xml...
	I0229 02:30:12.263526  369591 main.go:141] libmachine: (no-preload-247751) Creating domain...
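	[annotation] Restarting the stopped machine goes through the kvm2 driver's libvirt connection: make sure the "default" and per-cluster networks are active, re-read the domain XML, then start the domain. minikube uses the libvirt Go bindings directly; as a rough command-line stand-in only (assumed equivalent; domain and network names taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) {
    	out, err := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...).CombinedOutput()
    	fmt.Printf("virsh %v: %s err=%v\n", args, out, err)
    }

    func main() {
    	run("net-start", "default")              // may already be active
    	run("net-start", "mk-no-preload-247751") // per-cluster network
    	run("start", "no-preload-247751")        // boot the existing domain
    }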
	I0229 02:30:13.474816  369591 main.go:141] libmachine: (no-preload-247751) Waiting to get IP...
	I0229 02:30:13.475810  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.476251  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.476305  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.476230  370599 retry.go:31] will retry after 302.404435ms: waiting for machine to come up
	I0229 02:30:13.780776  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.781237  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.781265  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.781193  370599 retry.go:31] will retry after 364.673363ms: waiting for machine to come up
	I0229 02:30:12.236310  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:12.236352  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:30:12.238426  369508 machine.go:91] provisioned docker machine in 4m37.406828317s
	I0229 02:30:12.238513  369508 fix.go:56] fixHost completed within 4m37.429140371s
	I0229 02:30:12.238526  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 4m37.429164063s
	W0229 02:30:12.238553  369508 start.go:694] error starting host: provision: host is not running
	W0229 02:30:12.238763  369508 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 02:30:12.238784  369508 start.go:709] Will try again in 5 seconds ...
	I0229 02:30:14.148040  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.148530  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.148561  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.148471  370599 retry.go:31] will retry after 430.606986ms: waiting for machine to come up
	I0229 02:30:14.581180  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.581649  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.581679  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.581598  370599 retry.go:31] will retry after 557.726488ms: waiting for machine to come up
	I0229 02:30:15.141289  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.141736  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.141767  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.141675  370599 retry.go:31] will retry after 611.257074ms: waiting for machine to come up
	I0229 02:30:15.754464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.754802  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.754831  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.754752  370599 retry.go:31] will retry after 905.484801ms: waiting for machine to come up
	I0229 02:30:16.661691  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:16.662072  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:16.662099  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:16.662020  370599 retry.go:31] will retry after 1.007584217s: waiting for machine to come up
	I0229 02:30:17.671565  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:17.672118  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:17.672159  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:17.672048  370599 retry.go:31] will retry after 933.310317ms: waiting for machine to come up
	I0229 02:30:18.607108  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:18.607473  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:18.607496  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:18.607426  370599 retry.go:31] will retry after 1.135856775s: waiting for machine to come up
	I0229 02:30:17.239210  369508 start.go:365] acquiring machines lock for embed-certs-915633: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:30:19.744656  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:19.745017  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:19.745047  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:19.744969  370599 retry.go:31] will retry after 2.184552748s: waiting for machine to come up
	I0229 02:30:21.932313  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:21.932764  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:21.932794  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:21.932711  370599 retry.go:31] will retry after 2.256573009s: waiting for machine to come up
	I0229 02:30:24.191551  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:24.191987  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:24.192016  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:24.191948  370599 retry.go:31] will retry after 3.0850751s: waiting for machine to come up
	I0229 02:30:27.278526  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:27.278941  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:27.278977  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:27.278914  370599 retry.go:31] will retry after 3.196492358s: waiting for machine to come up
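	[annotation] Each "will retry after ..." line above comes from minikube's retry helper while it polls the network's DHCP leases for the rebooted domain's IP: the delay grows from ~300ms toward ~3s, with jitter. A capped-backoff sketch of the same pattern (the lookupIP stub is hypothetical):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func lookupIP() (string, error) { // stand-in for reading the DHCP lease table
    	return "", errors.New("unable to find current IP address")
    }

    func main() {
    	delay := 300 * time.Millisecond
    	for attempt := 1; attempt <= 15; attempt++ {
    		ip, err := lookupIP()
    		if err == nil {
    			fmt.Println("Found IP for machine:", ip)
    			return
    		}
    		d := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
    		fmt.Printf("retry %d: will retry after %v: %v\n", attempt, d, err)
    		time.Sleep(d)
    		if delay < 3*time.Second {
    			delay *= 2 // grow toward the ~3s ceiling seen in the log
    		}
    	}
    }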
	I0229 02:30:31.627482  369869 start.go:369] acquired machines lock for "default-k8s-diff-port-071485" in 4m6.129938439s
	I0229 02:30:31.627553  369869 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:31.627561  369869 fix.go:54] fixHost starting: 
	I0229 02:30:31.628005  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:31.628052  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:31.645217  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0229 02:30:31.645607  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:31.646146  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:30:31.646179  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:31.646526  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:31.646754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:31.646941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:30:31.648372  369869 fix.go:102] recreateIfNeeded on default-k8s-diff-port-071485: state=Stopped err=<nil>
	I0229 02:30:31.648410  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	W0229 02:30:31.648603  369869 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:31.650778  369869 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071485" ...
	I0229 02:30:30.479186  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479664  369591 main.go:141] libmachine: (no-preload-247751) Found IP for machine: 192.168.72.114
	I0229 02:30:30.479694  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has current primary IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479705  369591 main.go:141] libmachine: (no-preload-247751) Reserving static IP address...
	I0229 02:30:30.480161  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.480199  369591 main.go:141] libmachine: (no-preload-247751) DBG | skip adding static IP to network mk-no-preload-247751 - found existing host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"}
	I0229 02:30:30.480213  369591 main.go:141] libmachine: (no-preload-247751) Reserved static IP address: 192.168.72.114
	I0229 02:30:30.480233  369591 main.go:141] libmachine: (no-preload-247751) Waiting for SSH to be available...
	I0229 02:30:30.480246  369591 main.go:141] libmachine: (no-preload-247751) DBG | Getting to WaitForSSH function...
	I0229 02:30:30.482557  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.482907  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.482935  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.483110  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH client type: external
	I0229 02:30:30.483136  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa (-rw-------)
	I0229 02:30:30.483166  369591 main.go:141] libmachine: (no-preload-247751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:30.483180  369591 main.go:141] libmachine: (no-preload-247751) DBG | About to run SSH command:
	I0229 02:30:30.483197  369591 main.go:141] libmachine: (no-preload-247751) DBG | exit 0
	I0229 02:30:30.610329  369591 main.go:141] libmachine: (no-preload-247751) DBG | SSH cmd err, output: <nil>: 
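	[annotation] The WaitForSSH step above shells out to the system ssh client with the exact options logged and runs "exit 0"; a zero exit status means the guest's sshd is answering and provisioning can proceed. A minimal sketch of that probe, with the flag set trimmed but taken verbatim from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("ssh",
    		"-F", "/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa",
    		"-p", "22", "docker@192.168.72.114", "exit 0")
    	if err := cmd.Run(); err != nil {
    		fmt.Println("ssh not ready yet:", err)
    		return
    	}
    	fmt.Println("SSH cmd err, output: <nil>") // sshd answered
    }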
	I0229 02:30:30.610691  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetConfigRaw
	I0229 02:30:30.611393  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.614007  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614393  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.614426  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614689  369591 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/config.json ...
	I0229 02:30:30.614872  369591 machine.go:88] provisioning docker machine ...
	I0229 02:30:30.614892  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:30.615096  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615250  369591 buildroot.go:166] provisioning hostname "no-preload-247751"
	I0229 02:30:30.615272  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615444  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.617525  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617800  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.617835  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617898  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.618095  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618289  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618424  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.618564  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.618790  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.618807  369591 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-247751 && echo "no-preload-247751" | sudo tee /etc/hostname
	I0229 02:30:30.740902  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-247751
	
	I0229 02:30:30.740952  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.743879  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744353  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.744396  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744584  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.744843  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745014  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745197  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.745351  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.745525  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.745543  369591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-247751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247751/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-247751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:30.867175  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:30.867209  369591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:30.867229  369591 buildroot.go:174] setting up certificates
	I0229 02:30:30.867240  369591 provision.go:83] configureAuth start
	I0229 02:30:30.867248  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.867521  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.870143  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870443  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.870464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870678  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.872992  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873434  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.873463  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873643  369591 provision.go:138] copyHostCerts
	I0229 02:30:30.873713  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:30.873740  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:30.873830  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:30.873937  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:30.873948  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:30.873992  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:30.874070  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:30.874080  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:30.874110  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:30.874240  369591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.no-preload-247751 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube no-preload-247751]
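	[annotation] configureAuth regenerates a server certificate signed by the local minikube CA with the SAN list shown above (the VM IP twice, localhost, 127.0.0.1, minikube, and the machine name). A compact sketch of issuing a SAN certificate with crypto/x509; self-signed here for brevity, whereas minikube signs with ca.pem/ca-key.pem as the parent:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-247751"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
    		DNSNames:     []string{"localhost", "minikube", "no-preload-247751"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.72.114"), net.ParseIP("127.0.0.1")},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed for the sketch; pass the CA cert and key as parent/signer instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }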
	I0229 02:30:30.921711  369591 provision.go:172] copyRemoteCerts
	I0229 02:30:30.921769  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:30.921793  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.924128  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924436  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.924474  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924628  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.924815  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.924975  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.925073  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.009229  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:31.035962  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:30:31.062947  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:31.089920  369591 provision.go:86] duration metric: configureAuth took 222.667724ms
	I0229 02:30:31.089947  369591 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:31.090145  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:30:31.090256  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.092831  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093148  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.093192  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093338  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.093511  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093699  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093864  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.094032  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.094196  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.094211  369591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:31.381995  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:31.382023  369591 machine.go:91] provisioned docker machine in 767.136363ms
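	[annotation] Setting the container-runtime options above amounts to one remote command: write CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restart the service. (The "%!s(MISSING)" in the log is the command template being echoed without its format argument, which is normal for these logs.) A sketch of issuing that command over the same ssh transport, with paths and the option string copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	remote := `sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
    	cmd := exec.Command("ssh",
    		"-i", "/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa",
    		"docker@192.168.72.114", remote)
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s err=%v\n", out, err)
    }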
	I0229 02:30:31.382036  369591 start.go:300] post-start starting for "no-preload-247751" (driver="kvm2")
	I0229 02:30:31.382049  369591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:31.382066  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.382560  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:31.382596  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.385219  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385574  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.385602  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385742  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.385955  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.386091  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.386254  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.469621  369591 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:31.474615  369591 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:31.474640  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:31.474702  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:31.474772  369591 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:31.474867  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:31.484964  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:31.512459  369591 start.go:303] post-start completed in 130.406384ms
	I0229 02:30:31.512519  369591 fix.go:56] fixHost completed within 19.27376704s
	I0229 02:30:31.512569  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.515169  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515568  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.515596  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515717  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.515944  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516108  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516260  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.516417  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.516592  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.516605  369591 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:30:31.627335  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173831.594794890
	
	I0229 02:30:31.627357  369591 fix.go:206] guest clock: 1709173831.594794890
	I0229 02:30:31.627366  369591 fix.go:219] Guest: 2024-02-29 02:30:31.59479489 +0000 UTC Remote: 2024-02-29 02:30:31.512545974 +0000 UTC m=+292.733991044 (delta=82.248916ms)
	I0229 02:30:31.627395  369591 fix.go:190] guest clock delta is within tolerance: 82.248916ms
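	[annotation] The clock check above runs date +%s.%N on the guest and diffs it against the host's wall clock; here the 82ms delta is within tolerance, so no resync is needed. A sketch of the comparison (the runRemote stub is hypothetical; in the real flow both timestamps are captured back to back):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func runRemote(cmd string) string { // stand-in for ssh_runner running `date +%s.%N` on the guest
    	return "1709173831.594794890" // value from the log
    }

    func main() {
    	f, err := strconv.ParseFloat(runRemote("date +%s.%N"), 64)
    	if err != nil {
    		panic(err)
    	}
    	sec := int64(f)
    	nsec := int64((f - float64(sec)) * 1e9) // float64 keeps ~µs precision, fine for a sketch
    	guest := time.Unix(sec, nsec)
    	fmt.Printf("guest clock: %v delta=%v\n", guest.UTC(), time.Since(guest))
    }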
	I0229 02:30:31.627403  369591 start.go:83] releasing machines lock for "no-preload-247751", held for 19.38873796s
	I0229 02:30:31.627429  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.627713  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:31.630486  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.630930  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.630959  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.631131  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631640  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631830  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631920  369591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:31.631983  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.632122  369591 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:31.632160  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.634658  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.634874  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635050  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635079  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635348  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635354  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635379  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635478  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635566  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635633  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635758  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635768  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635934  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.635940  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.719735  369591 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:31.739831  369591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:31.891138  369591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:31.899497  369591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:31.899569  369591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:31.921755  369591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
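	[annotation] Because this cluster uses the bridge CNI, stray bridge/podman configs under /etc/cni/net.d are neutralized by renaming them with a .mk_disabled suffix, which is what the find ... -exec mv above does. The same idea in Go, shown against the local filesystem for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, _ := filepath.Glob(pat)
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Println("skip:", err)
    				continue
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }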
	I0229 02:30:31.921785  369591 start.go:475] detecting cgroup driver to use...
	I0229 02:30:31.921896  369591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:31.938157  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:31.952761  369591 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:31.952834  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:31.966785  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:31.980931  369591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:32.091879  369591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:32.261190  369591 docker.go:233] disabling docker service ...
	I0229 02:30:32.261272  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:32.278862  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:32.295382  369591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:32.433426  369591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:32.557975  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:30:32.573791  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:32.595797  369591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:32.595848  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.608978  369591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:32.609042  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.621681  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.634251  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.647107  369591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:30:32.660478  369591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:32.672596  369591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:32.672662  369591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:32.688480  369591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:30:32.700769  369591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:32.823703  369591 ssh_runner.go:195] Run: sudo systemctl restart crio
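	[annotation] The 02:30:32.59-32.82 steps rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.9, force cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", load br_netfilter (the sysctl probe fails with status 255 until the module is in), enable ip_forward, then daemon-reload and restart crio. A condensed sketch of the same edits, with the sed invocations copied from the log (this needs root on the guest):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func sh(script string) {
    	out, err := exec.Command("sh", "-c", script).CombinedOutput()
    	fmt.Printf("$ %s\n%s err=%v\n", script, out, err)
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	sh(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf)
    	sh(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf)
    	sh(`sudo sed -i '/conmon_cgroup = .*/d' ` + conf)
    	sh(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf)
    	sh(`sudo modprobe br_netfilter && sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`)
    	sh(`sudo systemctl daemon-reload && sudo systemctl restart crio`)
    }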
	I0229 02:30:33.004444  369591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:33.004531  369591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:33.010801  369591 start.go:543] Will wait 60s for crictl version
	I0229 02:30:33.010862  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.015224  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:33.064627  369591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:33.064721  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.108265  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.142639  369591 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 02:30:33.144169  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:33.147250  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147609  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:33.147644  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147836  369591 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:33.153138  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:33.169427  369591 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:30:33.169481  369591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:33.214079  369591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
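	[annotation] For the no-preload profile there is no preloaded image tarball, so minikube lists what the runtime already has (sudo crictl images --output json, above) and diffs it against the required image set before loading each missing image from the local cache. A sketch of that check; the JSON field names here ("images", "repoTags") are my assumption about crictl's output shape:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.29.0-rc.2", "registry.k8s.io/pause:3.9"} {
    		fmt.Printf("%s preloaded=%v\n", want, have[want])
    	}
    }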
	I0229 02:30:33.214113  369591 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:30:33.214193  369591 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.214216  369591 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.214252  369591 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.214276  369591 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.214335  369591 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.214323  369591 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.214354  369591 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 02:30:33.214241  369591 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.215880  369591 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215928  369591 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.215947  369591 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.216082  369591 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.216136  369591 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.216252  369591 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.348095  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 02:30:33.434211  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.496911  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.499249  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.503235  369591 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0229 02:30:33.503274  369591 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.503307  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.507506  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.548265  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.551287  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.589427  369591 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0229 02:30:33.589474  369591 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.589523  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.590660  369591 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0229 02:30:33.590688  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.590708  369591 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.590763  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.636886  369591 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0229 02:30:33.636934  369591 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.637001  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.664221  369591 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0229 02:30:33.664266  369591 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.664316  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.691890  369591 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0229 02:30:33.691945  369591 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.691978  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.691993  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.692003  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692096  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.692107  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.692104  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692165  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.793616  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793708  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.793723  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793772  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793839  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 02:30:33.793853  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793856  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 02:30:33.793884  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0229 02:30:33.793902  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.793910  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:33.793914  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:33.793936  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
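	The lines above are minikube's standard cache path when no preload tarball matches the runtime: inspect the image in the CRI-O/podman store, remove any stale copy, stat the cached tarball already copied to the guest, then podman-load it. A minimal manual sketch of the same check-remove-load cycle, using the kube-apiserver image from this run (commands and paths are taken from the log; running them by hand inside the guest is the assumption):

		# check whether the image is already present in the runtime's store
		sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.29.0-rc.2
		# remove a stale copy so the cached tarball wins
		sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
		# confirm the cached tarball exists on the guest
		stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
		# load it into the image store
		sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2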
	I0229 02:30:31.652037  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Start
	I0229 02:30:31.652202  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring networks are active...
	I0229 02:30:31.652984  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network default is active
	I0229 02:30:31.653457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network mk-default-k8s-diff-port-071485 is active
	I0229 02:30:31.653909  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Getting domain xml...
	I0229 02:30:31.654724  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Creating domain...
	I0229 02:30:32.911561  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting to get IP...
	I0229 02:30:32.912505  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.912932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.913032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:32.912928  370716 retry.go:31] will retry after 285.213813ms: waiting for machine to come up
	I0229 02:30:33.199327  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199733  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199764  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.199678  370716 retry.go:31] will retry after 334.890426ms: waiting for machine to come up
	I0229 02:30:33.536492  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.536976  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.537006  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.536924  370716 retry.go:31] will retry after 344.946846ms: waiting for machine to come up
	I0229 02:30:33.883432  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883911  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.883858  370716 retry.go:31] will retry after 516.135135ms: waiting for machine to come up
	I0229 02:30:34.401167  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401592  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401621  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.401543  370716 retry.go:31] will retry after 538.013174ms: waiting for machine to come up
	I0229 02:30:34.941529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942080  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942116  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.942039  370716 retry.go:31] will retry after 883.013858ms: waiting for machine to come up
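	Each retry above is the kvm2 driver polling libvirt for a DHCP lease whose MAC matches the domain, with a growing backoff between attempts. A rough manual equivalent, assuming virsh access to the same host (network name and MAC are taken from the log):

		virsh net-dhcp-leases mk-default-k8s-diff-port-071485 | grep '52:54:00:81:f9:08'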
	I0229 02:30:33.850786  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0229 02:30:33.850868  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 02:30:33.850977  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:34.154343  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.987957  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (3.194013383s)
	I0229 02:30:36.987999  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0229 02:30:36.988100  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.194139784s)
	I0229 02:30:36.988127  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0229 02:30:36.988148  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.194207246s)
	I0229 02:30:36.988178  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0229 02:30:36.988156  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988191  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.194323563s)
	I0229 02:30:36.988206  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988236  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988269  369591 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.833890629s)
	I0229 02:30:36.988240  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.13724749s)
	I0229 02:30:36.988310  369591 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 02:30:36.988331  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988343  369591 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.988375  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:36.993483  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:38.351556  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.363290185s)
	I0229 02:30:38.351599  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0229 02:30:38.351633  369591 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351632  369591 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.358113254s)
	I0229 02:30:38.351686  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 02:30:38.351705  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351782  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:35.827402  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827906  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:35.827872  370716 retry.go:31] will retry after 902.653821ms: waiting for machine to come up
	I0229 02:30:36.732470  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732925  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732957  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:36.732863  370716 retry.go:31] will retry after 1.322376383s: waiting for machine to come up
	I0229 02:30:38.057306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057874  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:38.057790  370716 retry.go:31] will retry after 1.16249498s: waiting for machine to come up
	I0229 02:30:39.221714  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222197  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:39.222156  370716 retry.go:31] will retry after 1.912383064s: waiting for machine to come up
	I0229 02:30:42.350149  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.998331984s)
	I0229 02:30:42.350198  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0229 02:30:42.350214  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.99848453s)
	I0229 02:30:42.350266  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0229 02:30:42.350305  369591 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:42.350357  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:41.135736  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136113  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136144  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:41.136058  370716 retry.go:31] will retry after 2.823296742s: waiting for machine to come up
	I0229 02:30:43.960885  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961677  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961703  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:43.961582  370716 retry.go:31] will retry after 3.266272258s: waiting for machine to come up
	I0229 02:30:44.528869  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.178478896s)
	I0229 02:30:44.528915  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0229 02:30:44.528947  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:44.529014  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:46.991074  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462030604s)
	I0229 02:30:46.991103  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0229 02:30:46.991129  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:46.991195  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:47.229005  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229478  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229511  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:47.229417  370716 retry.go:31] will retry after 3.429712893s: waiting for machine to come up
	I0229 02:30:51.887858  370051 start.go:369] acquired machines lock for "old-k8s-version-275488" in 4m15.644916266s
	I0229 02:30:51.887935  370051 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:51.887944  370051 fix.go:54] fixHost starting: 
	I0229 02:30:51.888374  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:51.888428  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:51.905851  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0229 02:30:51.906292  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:51.906778  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:30:51.906806  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:51.907250  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:51.907459  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:30:51.907631  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetState
	I0229 02:30:51.909061  370051 fix.go:102] recreateIfNeeded on old-k8s-version-275488: state=Stopped err=<nil>
	I0229 02:30:51.909093  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	W0229 02:30:51.909251  370051 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:51.911318  370051 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275488" ...
	I0229 02:30:50.662939  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663341  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Found IP for machine: 192.168.61.233
	I0229 02:30:50.663366  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserving static IP address...
	I0229 02:30:50.663404  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has current primary IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663745  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.663781  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserved static IP address: 192.168.61.233
	I0229 02:30:50.663804  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | skip adding static IP to network mk-default-k8s-diff-port-071485 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"}
	I0229 02:30:50.663819  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for SSH to be available...
	I0229 02:30:50.663830  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Getting to WaitForSSH function...
	I0229 02:30:50.665924  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666270  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.666306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666411  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH client type: external
	I0229 02:30:50.666435  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa (-rw-------)
	I0229 02:30:50.666464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:50.666477  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | About to run SSH command:
	I0229 02:30:50.666489  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | exit 0
	I0229 02:30:50.794598  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | SSH cmd err, output: <nil>: 
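	The DBG lines above spell out the external SSH invocation piecewise; assembled into a single command it is roughly the following (reconstructed from the logged argv, with only the flag order tidied):

		ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
		  -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no \
		  -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
		  -o IdentitiesOnly=yes \
		  -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa \
		  -p 22 docker@192.168.61.233 'exit 0'

	The 'exit 0' probe succeeding (the empty "SSH cmd err, output" above) is what marks SSH as available.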
	I0229 02:30:50.795011  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetConfigRaw
	I0229 02:30:50.795753  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:50.798443  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.798796  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.798822  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.799151  369869 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/config.json ...
	I0229 02:30:50.799410  369869 machine.go:88] provisioning docker machine ...
	I0229 02:30:50.799440  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:50.799684  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.799937  369869 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071485"
	I0229 02:30:50.799963  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.800129  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.802457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802786  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.802813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802923  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.803087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803281  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803393  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.803527  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.803744  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.803757  369869 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071485 && echo "default-k8s-diff-port-071485" | sudo tee /etc/hostname
	I0229 02:30:50.930812  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071485
	
	I0229 02:30:50.930849  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.933650  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934017  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.934057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934217  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.934458  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934651  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.934964  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.935141  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.935159  369869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071485/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:51.057233  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:51.057266  369869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:51.057307  369869 buildroot.go:174] setting up certificates
	I0229 02:30:51.057321  369869 provision.go:83] configureAuth start
	I0229 02:30:51.057335  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:51.057615  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.060233  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060563  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.060595  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060707  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.062583  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.062889  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.062938  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.063065  369869 provision.go:138] copyHostCerts
	I0229 02:30:51.063121  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:51.063140  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:51.063193  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:51.063290  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:51.063304  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:51.063332  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:51.063396  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:51.063403  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:51.063420  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:51.063482  369869 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071485 san=[192.168.61.233 192.168.61.233 localhost 127.0.0.1 minikube default-k8s-diff-port-071485]
	I0229 02:30:51.180356  369869 provision.go:172] copyRemoteCerts
	I0229 02:30:51.180417  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:51.180446  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.182981  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183262  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.183295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183465  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.183656  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.183814  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.183958  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.270548  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:51.297136  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 02:30:51.323133  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:51.349241  369869 provision.go:86] duration metric: configureAuth took 291.905825ms
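	configureAuth above refreshes the host-side CA material and generates a per-machine server certificate whose SANs are listed in the provision.go:112 line, then scp's the cert and key to /etc/docker on the guest. A quick check that the server certificate actually carries those SANs might look like this (run inside the guest; openssl being present is an assumption):

		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'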
	I0229 02:30:51.349269  369869 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:51.349453  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:30:51.349529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.352119  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352473  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.352503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352658  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.352839  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353009  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353122  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.353304  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.353480  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.353495  369869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:51.639987  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:51.640022  369869 machine.go:91] provisioned docker machine in 840.591751ms
	I0229 02:30:51.640041  369869 start.go:300] post-start starting for "default-k8s-diff-port-071485" (driver="kvm2")
	I0229 02:30:51.640057  369869 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:51.640087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.640450  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:51.640486  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.643118  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643427  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.643464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643661  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.643871  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.644025  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.644164  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.730150  369869 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:51.735109  369869 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:51.735135  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:51.735207  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:51.735298  369869 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:51.735416  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:51.745416  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:51.771727  369869 start.go:303] post-start completed in 131.66845ms
	I0229 02:30:51.771756  369869 fix.go:56] fixHost completed within 20.144195498s
	I0229 02:30:51.771782  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.774300  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774582  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.774610  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774744  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.774972  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.775481  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.775648  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.775659  369869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:30:51.887656  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173851.865903243
	
	I0229 02:30:51.887683  369869 fix.go:206] guest clock: 1709173851.865903243
	I0229 02:30:51.887691  369869 fix.go:219] Guest: 2024-02-29 02:30:51.865903243 +0000 UTC Remote: 2024-02-29 02:30:51.771760886 +0000 UTC m=+266.432013426 (delta=94.142357ms)
	I0229 02:30:51.887738  369869 fix.go:190] guest clock delta is within tolerance: 94.142357ms
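	The delta here is a simple subtraction of the two probes: 1709173851.865903243 s (guest clock) minus 1709173851.771760886 s (host-side reference) = 0.094142357 s, i.e. the 94.142357ms shown above, which is why the guest clock is declared within tolerance and no resync is attempted.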
	I0229 02:30:51.887744  369869 start.go:83] releasing machines lock for "default-k8s-diff-port-071485", held for 20.260217484s
	I0229 02:30:51.887771  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.888047  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.890930  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891264  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.891294  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891491  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892002  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892209  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892299  369869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:51.892370  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.892472  369869 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:51.892503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.895178  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895415  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895591  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895626  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895769  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895800  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895820  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.895966  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.896055  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896141  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896212  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896367  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.896447  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.976085  369869 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:52.001946  369869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:52.156753  369869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:52.164196  369869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:52.164302  369869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:52.189176  369869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:30:52.189201  369869 start.go:475] detecting cgroup driver to use...
	I0229 02:30:52.189281  369869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:52.207647  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:52.223752  369869 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:52.223842  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:52.246026  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:52.262180  369869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:52.409077  369869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:52.583777  369869 docker.go:233] disabling docker service ...
	I0229 02:30:52.583850  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:52.601434  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:52.617382  369869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:52.757258  369869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:52.898036  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
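	With containerd already stopped, the block above leaves CRI-O as the only runtime by stopping, disabling, and masking the cri-docker and docker units. Condensed into a sketch (same units as the log; the grouping of unit names is illustrative):

		sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
		sudo systemctl disable cri-docker.socket docker.socket
		sudo systemctl mask cri-docker.service docker.service
		systemctl is-active --quiet docker || echo 'docker is down'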
	I0229 02:30:52.915787  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:52.939344  369869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:52.939417  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.951659  369869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:52.951722  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.963072  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.974800  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.986490  369869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:30:52.998630  369869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:53.009783  369869 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:53.009862  369869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:53.026356  369869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:30:53.038720  369869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:53.171220  369869 ssh_runner.go:195] Run: sudo systemctl restart crio
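	The crio.conf edits above amount to three settings (pause image, cgroup driver, conmon cgroup) plus two kernel prerequisites before the restart. Consolidated into one sketch (paths and values are verbatim from the log; running them by hand is the assumption):

		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		sudo modprobe br_netfilter    # the bridge-nf sysctl probe above fails until this module loads
		sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
		sudo systemctl daemon-reload && sudo systemctl restart crio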
	I0229 02:30:53.326032  369869 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:53.326102  369869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:53.332369  369869 start.go:543] Will wait 60s for crictl version
	I0229 02:30:53.332431  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:30:53.336784  369869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:53.378780  369869 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:53.378902  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.411158  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.447038  369869 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:30:49.053324  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.062103665s)
	I0229 02:30:49.053353  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0229 02:30:49.053378  369591 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.053426  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.910791  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0229 02:30:49.910854  369591 cache_images.go:123] Successfully loaded all cached images
	I0229 02:30:49.910862  369591 cache_images.go:92] LoadImages completed in 16.696734078s
	I0229 02:30:49.910994  369591 ssh_runner.go:195] Run: crio config
	I0229 02:30:49.961413  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:30:49.961435  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:49.961456  369591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:49.961509  369591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-247751 NodeName:no-preload-247751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:49.961701  369591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-247751"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:30:49.961801  369591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-247751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:30:49.961866  369591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 02:30:49.973105  369591 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:49.973170  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:49.983178  369591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0229 02:30:50.001511  369591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 02:30:50.019574  369591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0229 02:30:50.037993  369591 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:50.042075  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
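
The bash one-liner above is an idempotent /etc/hosts pin: drop any existing control-plane.minikube.internal line, then append the current IP. The same rewrite in Go, as a sketch (pinHost is a hypothetical helper; the real command stages through /tmp and sudo-copies because /etc/hosts is root-owned):

    package main

    import (
    	"os"
    	"strings"
    )

    func pinHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale mapping for this name.
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	_ = pinHost("/etc/hosts", "192.168.72.114", "control-plane.minikube.internal")
    }
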
	I0229 02:30:50.054761  369591 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751 for IP: 192.168.72.114
	I0229 02:30:50.054796  369591 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:50.054976  369591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:50.055031  369591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:50.055146  369591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/client.key
	I0229 02:30:50.055243  369591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key.9adeb8c5
	I0229 02:30:50.055310  369591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key
	I0229 02:30:50.055440  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:50.055481  369591 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:50.055502  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:50.055542  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:50.055577  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:50.055658  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:50.055728  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:50.056454  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:50.083764  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:50.110733  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:50.139180  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:50.167000  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:50.194044  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:50.220671  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:50.247561  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:50.274577  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:50.300997  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:50.327718  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:50.355463  369591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:50.374921  369591 ssh_runner.go:195] Run: openssl version
	I0229 02:30:50.381614  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:50.393546  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398532  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398594  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.404719  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:30:50.416507  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:50.428072  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433031  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433106  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.439174  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:50.450778  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:50.462238  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467219  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467269  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.473395  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:50.484817  369591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:50.489643  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:50.496274  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:50.502579  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:50.508665  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:50.514827  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:50.520958  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
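
Two cert mechanics drive the lines above. OpenSSL resolves trusted CAs in /etc/ssl/certs through subject-hash symlinks named <hash>.0, which is why each `openssl x509 -hash -noout` is paired with an `ln -fs` to that hash; and `-checkend 86400` exits non-zero if the certificate expires within 24 hours, which is how the restart path decides the existing certs are still usable. A sketch of both, assuming root and openssl on PATH (installCert and expiresWithinADay are hypothetical names):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCert links a PEM under /etc/ssl/certs/<subject-hash>.0, the
    // layout OpenSSL scans when verifying chains.
    func installCert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // replace a stale link, mirroring ln -fs
    	return os.Symlink(pem, link)
    }

    // expiresWithinADay mirrors `openssl x509 -checkend 86400`: a non-nil
    // error (exit status 1) means the cert expires inside the next 24 hours.
    func expiresWithinADay(pem string) bool {
    	return exec.Command("openssl", "x509", "-noout", "-in", pem, "-checkend", "86400").Run() != nil
    }

    func main() {
    	fmt.Println(expiresWithinADay("/var/lib/minikube/certs/etcd/peer.crt"))
    }
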
	I0229 02:30:50.527032  369591 kubeadm.go:404] StartCluster: {Name:no-preload-247751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:50.527147  369591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:50.527194  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:50.565834  369591 cri.go:89] found id: ""
	I0229 02:30:50.565931  369591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:50.577305  369591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:50.577354  369591 kubeadm.go:636] restartCluster start
	I0229 02:30:50.577408  369591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:50.587881  369591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:50.588896  369591 kubeconfig.go:92] found "no-preload-247751" server: "https://192.168.72.114:8443"
	I0229 02:30:50.591223  369591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:50.601374  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:50.601434  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:50.613730  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.102422  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.102539  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.116483  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.601564  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.601657  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.615481  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.102039  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.102123  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.121300  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.601999  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.602093  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.618701  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.102291  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.102403  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.117898  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.602410  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.602496  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.618760  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
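
The run of "Checking apiserver status" entries above (which continues below) is a fixed-cadence poll: roughly every half second, pgrep for the apiserver and stop once a PID appears or the restart deadline lapses. A hypothetical local sketch of that loop (minikube runs the same command over ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitForAPIServerPID(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 1 when nothing matches, which surfaces as err here.
    		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
    	}
    	return "", fmt.Errorf("no kube-apiserver process after %s", timeout)
    }

    func main() {
    	fmt.Println(waitForAPIServerPID(10 * time.Second))
    }
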
	I0229 02:30:53.448437  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:53.451649  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.451998  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:53.452052  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.452302  369869 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:53.458709  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:53.477744  369869 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:30:53.477831  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:53.527511  369869 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:30:53.527593  369869 ssh_runner.go:195] Run: which lz4
	I0229 02:30:53.532370  369869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:30:53.537149  369869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:30:53.537179  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:30:51.912520  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .Start
	I0229 02:30:51.912688  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring networks are active...
	I0229 02:30:51.913511  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network default is active
	I0229 02:30:51.913929  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network mk-old-k8s-version-275488 is active
	I0229 02:30:51.914378  370051 main.go:141] libmachine: (old-k8s-version-275488) Getting domain xml...
	I0229 02:30:51.915191  370051 main.go:141] libmachine: (old-k8s-version-275488) Creating domain...
	I0229 02:30:53.179261  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting to get IP...
	I0229 02:30:53.180359  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.180800  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.180922  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.180789  370858 retry.go:31] will retry after 282.360524ms: waiting for machine to come up
	I0229 02:30:53.465135  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.465708  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.465742  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.465651  370858 retry.go:31] will retry after 341.876004ms: waiting for machine to come up
	I0229 02:30:53.809322  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.809734  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.809876  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.809797  370858 retry.go:31] will retry after 356.208548ms: waiting for machine to come up
	I0229 02:30:54.167329  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.167824  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.167852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.167760  370858 retry.go:31] will retry after 395.76503ms: waiting for machine to come up
	I0229 02:30:54.565496  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.565976  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.566004  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.565933  370858 retry.go:31] will retry after 617.898012ms: waiting for machine to come up
	I0229 02:30:55.185679  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:55.186193  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:55.186237  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:55.186144  370858 retry.go:31] will retry after 911.947678ms: waiting for machine to come up
	I0229 02:30:56.099334  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:56.099788  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:56.099815  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:56.099726  370858 retry.go:31] will retry after 1.132066509s: waiting for machine to come up
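
The libmachine waits above retry with growing, jittered delays (282ms, 341ms, ... 1.13s) while the domain's DHCP lease has no IP yet. Roughly, the pattern looks like this sketch; waitForIP, the growth factor, and the attempt cap are assumptions, not the retry.go implementation:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func waitForIP(getIP func() (string, bool)) (string, error) {
    	delay := 250 * time.Millisecond
    	for attempt := 0; attempt < 20; attempt++ {
    		if ip, ok := getIP(); ok {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2 // grow ~1.5x per attempt, like the log intervals
    	}
    	return "", fmt.Errorf("machine never reported an IP")
    }

    func main() {
    	// Stub that never finds an IP, to show the shape of a call.
    	fmt.Println(waitForIP(func() (string, bool) { return "", false }))
    }
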
	I0229 02:30:54.102304  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.102485  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.123193  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:54.601763  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.601890  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.621846  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.102417  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.102503  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.129010  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.601478  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.601532  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.620169  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.101701  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.101776  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.121369  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.601447  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.601550  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.617079  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.101509  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.101648  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.121691  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.601658  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.601754  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.620357  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.101829  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.101921  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.115818  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.602403  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.602509  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.621857  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.599398  369869 crio.go:444] Took 2.067052 seconds to copy over tarball
	I0229 02:30:55.599501  369869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:30:58.543850  369869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944309258s)
	I0229 02:30:58.543884  369869 crio.go:451] Took 2.944447 seconds to extract the tarball
	I0229 02:30:58.543896  369869 ssh_runner.go:146] rm: /preloaded.tar.lz4
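
The three steps above — copy the tarball in, untar it with lz4 under /var, delete it — are the whole preload fast path. A sketch of the extract-and-clean step (extractPreload is a hypothetical helper; real minikube removes the file over SSH as root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func extractPreload(tarball string) error {
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
    	}
    	return os.Remove(tarball) // reclaim ~450MB once the images are in place
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }
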
	I0229 02:30:58.592492  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:58.751479  369869 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:30:58.751509  369869 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:30:58.751576  369869 ssh_runner.go:195] Run: crio config
	I0229 02:30:58.813487  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:30:58.813515  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:58.813540  369869 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:58.813566  369869 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.233 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071485 NodeName:default-k8s-diff-port-071485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:58.813785  369869 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071485"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:30:58.813898  369869 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-071485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0229 02:30:58.813971  369869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:30:58.826199  369869 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:58.826324  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:58.837384  369869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 02:30:58.856023  369869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:30:58.876432  369869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0229 02:30:58.900684  369869 ssh_runner.go:195] Run: grep 192.168.61.233	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:58.905249  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:58.920007  369869 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485 for IP: 192.168.61.233
	I0229 02:30:58.920046  369869 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:58.920249  369869 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:58.920319  369869 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:58.920432  369869 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/client.key
	I0229 02:30:58.995037  369869 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key.b3fc8ab0
	I0229 02:30:58.995173  369869 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key
	I0229 02:30:58.995377  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:58.995430  369869 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:58.995451  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:58.995503  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:58.995543  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:58.995590  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:58.995653  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:58.996607  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:59.026487  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:59.054725  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:59.082553  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:59.110374  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:59.141972  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:59.170097  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:59.201206  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:59.232790  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:59.263940  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:59.292401  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:59.321920  369869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:59.343921  369869 ssh_runner.go:195] Run: openssl version
	I0229 02:30:59.351308  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:59.364059  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369212  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369302  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.375683  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:59.389046  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:59.404101  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409433  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409491  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.416126  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:59.429674  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:59.443405  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448931  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448991  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.455800  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:30:59.469013  369869 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:59.474745  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:59.481689  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:59.488868  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:59.496380  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:59.503593  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:59.510485  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 02:30:59.517770  369869 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-071485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:59.517894  369869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:59.517941  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:59.564631  369869 cri.go:89] found id: ""
	I0229 02:30:59.564718  369869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:59.578812  369869 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:59.578881  369869 kubeadm.go:636] restartCluster start
	I0229 02:30:59.578954  369869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:59.592900  369869 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.593909  369869 kubeconfig.go:92] found "default-k8s-diff-port-071485" server: "https://192.168.61.233:8444"
	I0229 02:30:59.596083  369869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:59.609384  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.609466  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.625617  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.110139  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.110282  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.127301  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.233610  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:57.234113  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:57.234145  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:57.234063  370858 retry.go:31] will retry after 1.238348525s: waiting for machine to come up
	I0229 02:30:58.474146  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:58.474696  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:58.474733  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:58.474642  370858 retry.go:31] will retry after 1.373712981s: waiting for machine to come up
	I0229 02:30:59.850075  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:59.850504  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:59.850526  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:59.850460  370858 retry.go:31] will retry after 2.156069813s: waiting for machine to come up
	I0229 02:30:59.101727  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.101812  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.120465  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.602060  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.602155  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.620588  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.102108  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.102203  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.120822  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.602443  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.602545  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.616796  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.616835  369591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:00.616858  369591 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:00.616873  369591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:00.616940  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:00.661747  369591 cri.go:89] found id: ""
	I0229 02:31:00.661869  369591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:00.684098  369591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:00.696989  369591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:00.697059  369591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708553  369591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708583  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:00.827929  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.578572  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.818119  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.892891  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
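
Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written /var/tmp/minikube/kubeadm.yaml. The same sequence as a sketch (rerunInitPhases is a hypothetical helper; it assumes kubeadm on PATH instead of the versioned /var/lib/minikube/binaries copy):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func rerunInitPhases(config string) error {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, strings.Fields(p)...)
    		args = append(args, "--config", config)
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("phase %q: %v: %s", p, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := rerunInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }
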
	I0229 02:31:01.964926  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:01.965037  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.466098  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.965290  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.465897  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.483060  369591 api_server.go:72] duration metric: took 1.518135432s to wait for apiserver process to appear ...
	I0229 02:31:03.483103  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:03.483127  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
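
The api_server.go:52-88 wait above is a simple poll: run `pgrep` over SSH until the kube-apiserver process shows up, then switch to the healthz probe. A minimal local sketch of that loop, assuming a 2-minute budget (the log only shows the ~500ms cadence) and shelling out directly instead of going through minikube's ssh_runner:

    // Sketch: poll until the kube-apiserver process exists, mirroring the
    // "waiting for apiserver process to appear" loop in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Same command the log runs; exits non-zero until the process is up.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("apiserver process appeared")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // the log polls roughly every 500ms
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }
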
	I0229 02:31:00.610179  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.610299  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.630460  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.109543  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.109680  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.129578  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.610203  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.610301  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.630078  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.109835  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.109945  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.127400  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.610160  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.610269  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.630581  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.109702  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.109836  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.129754  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.610303  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.610389  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.629702  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.110325  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.110459  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.128740  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.610305  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.610403  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.624716  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:05.110349  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.110457  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.130070  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.007911  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:02.008381  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:02.008409  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:02.008330  370858 retry.go:31] will retry after 1.864134048s: waiting for machine to come up
	I0229 02:31:03.873997  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:03.874606  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:03.874653  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:03.874547  370858 retry.go:31] will retry after 2.45659808s: waiting for machine to come up
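
The retry.go:31 lines above show libmachine waiting for the VM's DHCP lease with growing, jittered delays (1.86s, 2.46s, then 3.22s and 5.28s further down). A generic sketch of that retry-with-backoff shape, assuming nothing about minikube's internals beyond the printed cadence:

    // Sketch: retry with exponential backoff plus jitter, the pattern behind
    // the "will retry after ..." lines above. All names here are illustrative.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		if i == attempts-1 {
    			break
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay/2))) // spread retries out
    		fmt.Printf("will retry after %s\n", jittered)
    		time.Sleep(jittered)
    		delay *= 2 // grow the wait between attempts
    	}
    	return errors.New("gave up waiting")
    }

    func main() {
    	_ = retryWithBackoff(5, time.Second, func() error {
    		return errors.New("machine has no IP yet") // stand-in for the DHCP-lease check
    	})
    }
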
	I0229 02:31:06.111554  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.111581  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.111596  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.191055  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.191090  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.483401  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.489220  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.489254  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:06.983921  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.988354  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.988430  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:07.483305  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:07.489830  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:31:07.497146  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:31:07.497187  369591 api_server.go:131] duration metric: took 4.014075718s to wait for apiserver health ...
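
The healthz sequence above is the typical progression for a restarting control plane: anonymous requests get 403 until RBAC bootstrap completes, then 500 while the [-] post-start hooks drain, then 200 "ok". A minimal probe in the same spirit; the endpoint is copied from the log, and skipping TLS verification matches the anonymous, uncredentialed probe the 403s imply:

    // Sketch: poll an apiserver /healthz endpoint until it returns 200.
    // 403 (anonymous user) and 500 (post-start hooks still failing) both
    // count as "not ready yet", exactly as the loop above treats them.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.72.114:8443/healthz", 2*time.Minute))
    }

Treating 403 and 500 as retryable rather than fatal is what lets the loop ride out RBAC bootstrap instead of failing on the first response.
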
	I0229 02:31:07.497201  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:31:07.497210  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:07.498785  369591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:07.500032  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:07.530625  369591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
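
The 457-byte file written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI config the "Configuring bridge CNI" step refers to. The log does not show its contents; the constant below is only an assumed, representative bridge conflist of that general shape, embedded as a Go string to keep one language across these examples:

    // Assumed example only: a typical bridge + portmap conflist of the kind
    // written to /etc/cni/net.d. Subnet and names are placeholders.
    package cni

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`
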
	I0229 02:31:07.594249  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:07.604940  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:07.604973  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:07.604980  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:07.604989  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:07.604995  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:07.605003  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:07.605015  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:07.605022  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:07.605032  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:07.605052  369591 system_pods.go:74] duration metric: took 10.776743ms to wait for pod list to return data ...
	I0229 02:31:07.605061  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:07.608034  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:07.608059  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:07.608073  369591 node_conditions.go:105] duration metric: took 3.004467ms to run NodePressure ...
	I0229 02:31:07.608096  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:07.975871  369591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980949  369591 kubeadm.go:787] kubelet initialised
	I0229 02:31:07.980970  369591 kubeadm.go:788] duration metric: took 5.071971ms waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980979  369591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:07.986764  369591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.992673  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992698  369591 pod_ready.go:81] duration metric: took 5.911106ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.992707  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992717  369591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.997300  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997322  369591 pod_ready.go:81] duration metric: took 4.594827ms waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.997330  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997335  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.004032  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004052  369591 pod_ready.go:81] duration metric: took 6.71117ms waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.004060  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004066  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.009947  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.009985  369591 pod_ready.go:81] duration metric: took 5.909502ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.010001  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.010009  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.398938  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398965  369591 pod_ready.go:81] duration metric: took 388.944943ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.398975  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398982  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.797706  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797733  369591 pod_ready.go:81] duration metric: took 398.745142ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.797744  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797751  369591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:09.198467  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198496  369591 pod_ready.go:81] duration metric: took 400.737315ms waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:09.198506  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198511  369591 pod_ready.go:38] duration metric: took 1.217523271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
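
The pod_ready.go loop above polls each system-critical pod for its Ready condition and, as the "(skipping!)" lines show, bails out early while the node itself still reports Ready:"False". A condensed client-go version of the per-pod check; the package paths are real client-go, while the helper name and polling knobs are illustrative:

    // Sketch: wait for a pod's Ready condition, the check pod_ready.go
    // performs for each system-critical pod above.
    package podwait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 400*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling through transient errors
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
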
	I0229 02:31:09.198530  369591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:31:09.211194  369591 ops.go:34] apiserver oom_adj: -16
	I0229 02:31:09.211222  369591 kubeadm.go:640] restartCluster took 18.633858862s
	I0229 02:31:09.211232  369591 kubeadm.go:406] StartCluster complete in 18.684207766s
	I0229 02:31:09.211263  369591 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.211346  369591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:09.212899  369591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.213213  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:31:09.213318  369591 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:31:09.213406  369591 addons.go:69] Setting storage-provisioner=true in profile "no-preload-247751"
	I0229 02:31:09.213426  369591 addons.go:69] Setting default-storageclass=true in profile "no-preload-247751"
	I0229 02:31:09.213446  369591 addons.go:69] Setting metrics-server=true in profile "no-preload-247751"
	I0229 02:31:09.213463  369591 addons.go:234] Setting addon metrics-server=true in "no-preload-247751"
	I0229 02:31:09.213465  369591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-247751"
	I0229 02:31:09.213463  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0229 02:31:09.213472  369591 addons.go:243] addon metrics-server should already be in state true
	I0229 02:31:09.213435  369591 addons.go:234] Setting addon storage-provisioner=true in "no-preload-247751"
	W0229 02:31:09.213515  369591 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:31:09.213519  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213541  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213915  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213924  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213943  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213978  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.218976  369591 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-247751" context rescaled to 1 replicas
	I0229 02:31:09.219015  369591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:31:09.220657  369591 out.go:177] * Verifying Kubernetes components...
	I0229 02:31:09.221954  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:31:09.230064  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0229 02:31:09.230528  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.231030  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.231053  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.231526  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.231762  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.233032  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0229 02:31:09.233487  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.233929  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0229 02:31:09.234003  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234028  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.234293  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.234406  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.234784  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234811  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.235009  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235068  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.235163  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.235631  369591 addons.go:234] Setting addon default-storageclass=true in "no-preload-247751"
	W0229 02:31:09.235651  369591 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:31:09.235679  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.235738  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235772  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.236123  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.236157  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.250756  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0229 02:31:09.251190  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.251830  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.251855  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.252228  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.252403  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.254210  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.256240  369591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:09.257522  369591 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.257537  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:31:09.257552  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.255418  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0229 02:31:09.255485  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0229 02:31:09.258003  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258129  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258432  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258457  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258664  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258676  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258822  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.258983  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.259278  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.259313  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.259533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.261295  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.261320  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.262706  369591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:31:05.610163  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.610319  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.627782  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.110424  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.110521  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.129628  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.610193  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.610330  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.624176  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.110249  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.110354  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.129955  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.609462  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.609536  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.623687  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.110263  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.110407  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.126900  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.610447  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.625182  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.109675  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.109759  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.124637  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.610399  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.630681  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.630715  369869 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:09.630757  369869 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:09.630777  369869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:09.630844  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:09.683876  369869 cri.go:89] found id: ""
	I0229 02:31:09.683963  369869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:09.706059  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:09.719868  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:09.719939  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734591  369869 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734622  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:09.862689  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
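
Both runs above rebuild the control plane by replaying individual `kubeadm init phase` commands against the same --config rather than a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, etcd, and later the addon phase. A local sketch of that sequence, exec'd directly here instead of over minikube's ssh_runner (the phase names and config path are copied from the log):

    // Sketch: replay the kubeadm init phases in the order the log runs them.
    // Error handling is minimal for brevity.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			log.Fatalf("kubeadm %v failed: %v\n%s", p, err, out)
    		}
    	}
    }

Running phases individually lets the caller reuse the certs and etcd data already on disk and re-enter the sequence after a partial failure, which is why the reconfigure path above prefers it to a monolithic init.
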
	I0229 02:31:09.263808  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:31:09.263830  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:31:09.263849  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.261760  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.261947  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.263890  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.264339  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.264522  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.264704  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.266885  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267339  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.267358  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.267649  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.267782  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.267862  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.302813  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0229 02:31:09.303329  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.303878  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.303909  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.304305  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.304509  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.306147  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.306434  369591 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.306454  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:31:09.306472  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.309029  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309345  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.309382  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309670  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.309872  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.310048  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.310193  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.402579  369591 node_ready.go:35] waiting up to 6m0s for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:09.402756  369591 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:31:09.420259  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.426629  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:31:09.426655  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:31:09.446028  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.457219  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:31:09.457244  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:31:09.504028  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:09.504054  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:31:09.554137  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:10.485560  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.039492326s)
	I0229 02:31:10.485633  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485646  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.485928  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065634917s)
	I0229 02:31:10.485970  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485986  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486053  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.486072  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486092  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486104  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486112  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486254  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486287  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486304  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486320  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.487538  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487556  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487566  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.487543  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487582  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487579  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.494355  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.494374  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.494614  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.494635  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.494633  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.559201  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.005004802s)
	I0229 02:31:10.559258  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559276  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559592  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559614  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559625  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559633  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559899  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559915  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559926  369591 addons.go:470] Verifying addon metrics-server=true in "no-preload-247751"
	I0229 02:31:10.561833  369591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:31:06.333259  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:06.333776  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:06.333811  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:06.333733  370858 retry.go:31] will retry after 3.223893936s: waiting for machine to come up
	I0229 02:31:09.559349  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:09.559937  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:09.559968  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:09.559891  370858 retry.go:31] will retry after 5.278822831s: waiting for machine to come up
	I0229 02:31:10.560171  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.563240  369591 addons.go:505] enable addons completed in 1.349905679s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 02:31:11.408006  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:10.805438  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.016546  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.132323  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.212201  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:11.212309  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:11.713366  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.212866  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.713327  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.732027  369869 api_server.go:72] duration metric: took 1.519826457s to wait for apiserver process to appear ...
	I0229 02:31:12.732056  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:12.732078  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.109299  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.109349  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.109368  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.166169  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.166209  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.232359  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.267052  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.267099  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
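
The 403 → 500 → 200 progression in these api_server.go lines is the normal apiserver boot sequence: anonymous access to /healthz is forbidden until the RBAC bootstrap roles land, after which the endpoint reports per-poststarthook status ([-] flipping to [+]) until every hook succeeds. A probe loop equivalent to what is logged here can be sketched in a few lines of Go; this is an illustrative sketch, not minikube's actual implementation (endpoint URL taken from the log):

    // Illustrative sketch of the /healthz poll logged above. TLS verification
    // is skipped because the probe is anonymous and the apiserver certificate
    // is not trusted by the host.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://192.168.61.233:8444/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
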
	I0229 02:31:16.096073  369508 start.go:369] acquired machines lock for "embed-certs-915633" in 58.856797615s
	I0229 02:31:16.096132  369508 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:31:16.096144  369508 fix.go:54] fixHost starting: 
	I0229 02:31:16.096651  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:16.096692  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:16.115912  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0229 02:31:16.116419  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:16.116967  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:31:16.116999  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:16.117362  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:16.117562  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:16.117742  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:31:16.119589  369508 fix.go:102] recreateIfNeeded on embed-certs-915633: state=Stopped err=<nil>
	I0229 02:31:16.119614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	W0229 02:31:16.119809  369508 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:31:16.121566  369508 out.go:177] * Restarting existing kvm2 VM for "embed-certs-915633" ...
	I0229 02:31:14.842498  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843049  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has current primary IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843083  370051 main.go:141] libmachine: (old-k8s-version-275488) Found IP for machine: 192.168.39.160
	I0229 02:31:14.843112  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserving static IP address...
	I0229 02:31:14.843485  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.843510  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | skip adding static IP to network mk-old-k8s-version-275488 - found existing host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"}
	I0229 02:31:14.843525  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserved static IP address: 192.168.39.160
	I0229 02:31:14.843535  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Getting to WaitForSSH function...
	I0229 02:31:14.843553  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting for SSH to be available...
	I0229 02:31:14.845739  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846087  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.846120  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846289  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH client type: external
	I0229 02:31:14.846319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa (-rw-------)
	I0229 02:31:14.846355  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:14.846372  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | About to run SSH command:
	I0229 02:31:14.846390  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | exit 0
	I0229 02:31:14.979384  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | SSH cmd err, output: <nil>: 
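
The WaitForSSH step above simply shells out with non-interactive options and runs `exit 0` until the command succeeds. A minimal sketch of that probe (the key path is a placeholder, not the path from this run):

    // Minimal sketch of the WaitForSSH probe: run `exit 0` over ssh with the
    // same non-interactive options and treat exit status 0 as "SSH is up".
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	args := []string{
    		"-o", "ConnectTimeout=10", "-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null", "-o", "IdentitiesOnly=yes",
    		"-i", "/path/to/machine/id_rsa", // placeholder key path
    		"docker@192.168.39.160", "exit 0",
    	}
    	for exec.Command("ssh", args...).Run() != nil {
    		time.Sleep(time.Second)
    	}
    	fmt.Println("SSH available")
    }
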
	I0229 02:31:14.979896  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetConfigRaw
	I0229 02:31:14.980716  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:14.983852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984278  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.984319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984639  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:31:14.984865  370051 machine.go:88] provisioning docker machine ...
	I0229 02:31:14.984890  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:14.985140  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985324  370051 buildroot.go:166] provisioning hostname "old-k8s-version-275488"
	I0229 02:31:14.985347  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985494  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:14.988036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988438  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.988464  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988633  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:14.988829  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989003  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989174  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:14.989361  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:14.989604  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:14.989621  370051 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275488 && echo "old-k8s-version-275488" | sudo tee /etc/hostname
	I0229 02:31:15.125564  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275488
	
	I0229 02:31:15.125605  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.128963  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129570  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.129652  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129735  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.129996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130185  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130380  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.130616  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.130872  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.130900  370051 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275488/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:15.272298  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
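
The shell snippet above is an idempotent hostname pin: if no /etc/hosts line already ends with the hostname, it either rewrites the existing 127.0.1.1 entry in place or appends one. The same logic, as a pure-string Go sketch (the real step runs grep/sed/tee over SSH):

    // Sketch of the idempotent /etc/hosts edit performed above.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func pinHostname(hosts, name string) string {
    	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
    		return hosts // already mapped; mirrors the grep -xq guard
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return hosts + "127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(pinHostname("127.0.0.1 localhost\n127.0.1.1 minikube\n", "old-k8s-version-275488"))
    }
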
	I0229 02:31:15.272337  370051 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:15.272368  370051 buildroot.go:174] setting up certificates
	I0229 02:31:15.272385  370051 provision.go:83] configureAuth start
	I0229 02:31:15.272402  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:15.272772  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:15.276382  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.276838  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.276869  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.277051  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.279927  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280298  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.280326  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280505  370051 provision.go:138] copyHostCerts
	I0229 02:31:15.280555  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:15.280566  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:15.280619  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:15.280749  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:15.280764  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:15.280789  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:15.280857  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:15.280871  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:15.280891  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:15.280954  370051 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275488 san=[192.168.39.160 192.168.39.160 localhost 127.0.0.1 minikube old-k8s-version-275488]
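
configureAuth issues a server certificate signed by the shared minikube CA, with every address the machine may be reached at (IP, localhost, hostname) in the SAN list shown above. A self-contained crypto/x509 sketch of that shape; the CA here is generated in memory purely for illustration, whereas the real step loads ca.pem/ca-key.pem from disk, and error handling is elided:

    // Hypothetical sketch of "generating server cert" with SANs (crypto/x509).
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in CA; the real flow reads the existing CA cert and key.
    	caPriv, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"minikubeCA"}},
    		NotBefore:    time.Now(), NotAfter: time.Now().AddDate(10, 0, 0),
    		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caPriv.PublicKey, caPriv)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvPriv, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-275488"}},
    		NotBefore:    time.Now(), NotAfter: time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs mirroring the san=[...] list in the log line above.
    		IPAddresses: []net.IP{net.ParseIP("192.168.39.160"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-275488"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvPriv.PublicKey, caPriv)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
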
	I0229 02:31:15.360428  370051 provision.go:172] copyRemoteCerts
	I0229 02:31:15.360487  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:15.360512  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.363540  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.363931  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.363966  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.364154  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.364337  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.364495  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.364622  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.453643  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:15.483233  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 02:31:15.512164  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:15.543453  370051 provision.go:86] duration metric: configureAuth took 271.048547ms
	I0229 02:31:15.543484  370051 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:15.543705  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:31:15.543816  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.546472  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.546807  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.546835  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.547049  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.547272  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547455  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547662  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.547861  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.548035  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.548052  370051 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:15.835533  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:15.835572  370051 machine.go:91] provisioned docker machine in 850.691497ms
	I0229 02:31:15.835589  370051 start.go:300] post-start starting for "old-k8s-version-275488" (driver="kvm2")
	I0229 02:31:15.835604  370051 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:15.835635  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:15.835995  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:15.836025  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.838946  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839297  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.839330  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839460  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.839665  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.839839  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.840008  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.925849  370051 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:15.931227  370051 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:15.931260  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:15.931363  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:15.931465  370051 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:15.931574  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:15.942500  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:15.972803  370051 start.go:303] post-start completed in 137.19736ms
	I0229 02:31:15.972838  370051 fix.go:56] fixHost completed within 24.084893996s
	I0229 02:31:15.972873  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.975698  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976063  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.976093  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976279  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.976518  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976659  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976795  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.976959  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.977119  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.977130  370051 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:16.095892  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173876.041987567
	
	I0229 02:31:16.095917  370051 fix.go:206] guest clock: 1709173876.041987567
	I0229 02:31:16.095927  370051 fix.go:219] Guest: 2024-02-29 02:31:16.041987567 +0000 UTC Remote: 2024-02-29 02:31:15.972843681 +0000 UTC m=+279.886639354 (delta=69.143886ms)
	I0229 02:31:16.095954  370051 fix.go:190] guest clock delta is within tolerance: 69.143886ms
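
The guest-clock check in fix.go reads the guest's `date +%s.%N`, compares it against the host clock, and only resyncs when the delta exceeds a tolerance; here the 69ms drift passes. The arithmetic, as a sketch using the values logged above (the one-second tolerance is an assumption, not minikube's configured value):

    // Sketch of the guest-clock comparison. ParseFloat loses sub-microsecond
    // precision at this magnitude, which is fine for a millisecond-scale check.
    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	guestOut := "1709173876.041987567"         // guest `date +%s.%N` from the log
    	remote := time.Unix(1709173875, 972843681) // host-side timestamp from the log
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := guest.Sub(remote)
    	const tolerance = time.Second // assumed tolerance
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < tolerance)
    }
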
	I0229 02:31:16.095962  370051 start.go:83] releasing machines lock for "old-k8s-version-275488", held for 24.208056775s
	I0229 02:31:16.095996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.096336  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:16.099518  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100016  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.100060  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100189  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100751  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100955  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.101035  370051 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:16.101084  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.101167  370051 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:16.101190  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.104588  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.104638  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105000  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105059  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105101  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105335  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105546  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105590  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105821  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105832  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106002  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:16.106028  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106180  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.732828  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.739797  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.739827  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.232355  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.240421  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:16.240462  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.732451  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.740118  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:31:16.748529  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:31:16.748567  369869 api_server.go:131] duration metric: took 4.0165029s to wait for apiserver health ...
	I0229 02:31:16.748580  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:31:16.748588  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:16.750561  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:16.194120  370051 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:16.220808  370051 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:16.386082  370051 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:16.393419  370051 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:16.393512  370051 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:16.418966  370051 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:16.419003  370051 start.go:475] detecting cgroup driver to use...
	I0229 02:31:16.419087  370051 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:16.444372  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:16.466354  370051 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:16.466430  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:16.488710  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:16.509561  370051 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:16.651716  370051 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:16.840453  370051 docker.go:233] disabling docker service ...
	I0229 02:31:16.840538  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:16.869611  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:16.890123  370051 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:17.047701  370051 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:17.225457  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:17.248553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:17.275486  370051 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 02:31:17.275572  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.290350  370051 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:17.290437  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.304093  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.320562  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
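
The three sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" (CRI-O expects "pod" there when not using the systemd manager). The same edit as a Go sketch over the file contents:

    // Sketch of the 02-crio.conf rewrite performed by the sed commands above.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.1"`)
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }
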
	I0229 02:31:17.339790  370051 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:17.356570  370051 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:17.371208  370051 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:17.371303  370051 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:17.390748  370051 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
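
The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. The pattern, sketched (must run as root on the guest):

    // Sketch of the bridge-netfilter bring-up: probe the sysctl, load the
    // module when the key is missing, then turn on ip_forward.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
    			return
    		}
    	}
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
    	}
    }
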
	I0229 02:31:17.405750  370051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:17.555023  370051 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:17.754419  370051 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:17.754508  370051 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:17.760190  370051 start.go:543] Will wait 60s for crictl version
	I0229 02:31:17.760245  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:17.765195  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:17.815839  370051 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:17.815953  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.857470  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.896796  370051 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 02:31:13.906892  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:15.907106  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:16.914513  369591 node_ready.go:49] node "no-preload-247751" has status "Ready":"True"
	I0229 02:31:16.914545  369591 node_ready.go:38] duration metric: took 7.511932085s waiting for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:16.914560  369591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:16.925133  369591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940518  369591 pod_ready.go:92] pod "coredns-76f75df574-2z5w8" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:16.940553  369591 pod_ready.go:81] duration metric: took 15.382701ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940568  369591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.122967  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Start
	I0229 02:31:16.123141  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring networks are active...
	I0229 02:31:16.124019  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network default is active
	I0229 02:31:16.124630  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network mk-embed-certs-915633 is active
	I0229 02:31:16.125118  369508 main.go:141] libmachine: (embed-certs-915633) Getting domain xml...
	I0229 02:31:16.126026  369508 main.go:141] libmachine: (embed-certs-915633) Creating domain...
	I0229 02:31:17.664537  369508 main.go:141] libmachine: (embed-certs-915633) Waiting to get IP...
	I0229 02:31:17.665883  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.666462  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.666595  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.666455  371066 retry.go:31] will retry after 193.172159ms: waiting for machine to come up
	I0229 02:31:17.861043  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.861754  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.861781  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.861651  371066 retry.go:31] will retry after 298.133474ms: waiting for machine to come up
	I0229 02:31:18.161304  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.161851  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.161886  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.161818  371066 retry.go:31] will retry after 402.680342ms: waiting for machine to come up
	I0229 02:31:18.566482  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.567145  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.567165  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.567068  371066 retry.go:31] will retry after 536.886613ms: waiting for machine to come up
	I0229 02:31:19.106090  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.106797  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.106823  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.106714  371066 retry.go:31] will retry after 583.032631ms: waiting for machine to come up
	I0229 02:31:19.691531  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.692096  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.692127  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.692000  371066 retry.go:31] will retry after 780.156818ms: waiting for machine to come up
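
The retry.go lines show the wait-for-IP loop backing off (193ms, 298ms, 402ms, 536ms, 583ms, 780ms): roughly growing waits with jitter rather than a fixed interval. A generic sketch of that loop follows; the growth factor and jitter here are illustrative, not minikube's exact parameters:

    // Illustrative retry-with-backoff sketch matching the retry.go pattern above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryUntil(attempts int, base time.Duration, probe func() error) error {
    	wait := base
    	for i := 0; i < attempts; i++ {
    		if probe() == nil {
    			return nil
    		}
    		jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		wait = wait * 3 / 2 // assumed growth factor
    	}
    	return errors.New("machine never came up")
    }

    func main() {
    	calls := 0
    	_ = retryUntil(10, 200*time.Millisecond, func() error {
    		if calls++; calls < 6 {
    			return errors.New("no IP yet")
    		}
    		return nil
    	})
    }
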
	I0229 02:31:16.752375  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:16.783785  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
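
The 457-byte 1-k8s.conflist written above is the bridge CNI config generated for the kvm2 + crio combination. Its shape is the standard bridge + host-local chain; the values below are illustrative assumptions, not the byte-for-byte file from this run:

    // Illustrative bridge CNI conflist of the kind written to
    // /etc/cni/net.d/1-k8s.conflist; field values are assumptions.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
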
	I0229 02:31:16.816646  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:16.829430  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:16.829480  369869 system_pods.go:61] "coredns-5dd5756b68-652db" [d989183e-dc0d-4913-8eab-fdfac0cf7ad7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:16.829491  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [aba29f47-cf0e-4ee5-8d18-7647b36369e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:16.829501  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [26a426b2-d5b7-456e-a733-3317009974ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:16.829517  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [a896f9fa-991f-44bb-bd97-02fac3494eea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:16.829528  369869 system_pods.go:61] "kube-proxy-g976s" [bc750be0-ae2b-4033-b65b-f1cccaebf32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:16.829536  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [d99d25bf-25f4-4057-aedb-fc5ba797af47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:16.829544  369869 system_pods.go:61] "metrics-server-57f55c9bc5-86frx" [0ad81c0d-3f9a-45d8-93d8-bbb9e276d5b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:16.829560  369869 system_pods.go:61] "storage-provisioner" [92683c3e-04c1-4cef-988d-3b8beb7d4399] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:16.829570  369869 system_pods.go:74] duration metric: took 12.896339ms to wait for pod list to return data ...
	I0229 02:31:16.829584  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:16.837494  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:16.837524  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:16.837535  369869 node_conditions.go:105] duration metric: took 7.942051ms to run NodePressure ...
	I0229 02:31:16.837560  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:17.293873  369869 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300874  369869 kubeadm.go:787] kubelet initialised
	I0229 02:31:17.300907  369869 kubeadm.go:788] duration metric: took 7.00259ms waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300919  369869 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:17.315838  369869 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.328228  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328265  369869 pod_ready.go:81] duration metric: took 12.396088ms waiting for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.328278  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328287  369869 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.335458  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335487  369869 pod_ready.go:81] duration metric: took 7.145351ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.335497  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335505  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.356278  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356365  369869 pod_ready.go:81] duration metric: took 20.849982ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.356385  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356396  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:19.376170  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:17.898162  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:17.901332  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.901809  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:17.901840  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.902046  370051 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:17.907256  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:17.924135  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:31:17.924218  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:17.986923  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:17.986992  370051 ssh_runner.go:195] Run: which lz4
	I0229 02:31:17.992110  370051 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:17.997252  370051 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:17.997287  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 02:31:20.124958  370051 crio.go:444] Took 2.132885 seconds to copy over tarball
	I0229 02:31:20.125075  370051 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
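
The extract step above is worth unpacking; this is the same command restated with comments, not an addition to the flow:

	# Unpack the preloaded image tarball into /var, where cri-o keeps its storage.
	# --xattrs/--xattrs-include preserve extended attributes such as file
	# capabilities; -I lz4 pipes the archive through the lz4 decompressor.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
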
	I0229 02:31:18.948383  369591 pod_ready.go:102] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:20.950330  369591 pod_ready.go:92] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:20.950359  369591 pod_ready.go:81] duration metric: took 4.009782336s waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.950372  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460878  369591 pod_ready.go:92] pod "kube-apiserver-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.460907  369591 pod_ready.go:81] duration metric: took 1.510525429s waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460922  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468463  369591 pod_ready.go:92] pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.468487  369591 pod_ready.go:81] duration metric: took 7.556807ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468497  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476459  369591 pod_ready.go:92] pod "kube-proxy-cdc4l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.476488  369591 pod_ready.go:81] duration metric: took 7.983254ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476501  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482564  369591 pod_ready.go:92] pod "kube-scheduler-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.482589  369591 pod_ready.go:81] duration metric: took 6.080532ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482598  369591 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.474186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:20.474741  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:20.474784  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:20.474647  371066 retry.go:31] will retry after 845.550951ms: waiting for machine to come up
	I0229 02:31:21.322246  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:21.323007  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:21.323031  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:21.322935  371066 retry.go:31] will retry after 1.085864892s: waiting for machine to come up
	I0229 02:31:22.410244  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:22.410735  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:22.410766  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:22.410687  371066 retry.go:31] will retry after 1.587558593s: waiting for machine to come up
	I0229 02:31:24.000303  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:24.000914  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:24.000944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:24.000828  371066 retry.go:31] will retry after 2.058374822s: waiting for machine to come up
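
The retry lines above are libmachine polling libvirt's DHCP leases for the guest's MAC address with growing backoff. The same information can be inspected directly, assuming virsh access on the host running the test:

	# List current DHCP leases on the libvirt network minikube created; the
	# machine is "up" once its MAC (52:54:00:26:ca:ce) holds an address.
	sudo virsh net-dhcp-leases mk-embed-certs-915633
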
	I0229 02:31:21.867552  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.972250  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.981829  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.981860  369869 pod_ready.go:81] duration metric: took 6.625453699s waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.981875  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994568  369869 pod_ready.go:92] pod "kube-proxy-g976s" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.994597  369869 pod_ready.go:81] duration metric: took 12.712769ms waiting for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994609  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002085  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:24.002110  369869 pod_ready.go:81] duration metric: took 7.492788ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002133  369869 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.625489  370051 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.500380961s)
	I0229 02:31:23.625526  370051 crio.go:451] Took 3.500531 seconds to extract the tarball
	I0229 02:31:23.625536  370051 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:31:23.671458  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:23.714048  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:23.714087  370051 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:31:23.714189  370051 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.714213  370051 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.714309  370051 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:31:23.714424  370051 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.714269  370051 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.714461  370051 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.714519  370051 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.714192  370051 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.716086  370051 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.716076  370051 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716088  370051 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.716143  370051 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.716081  370051 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.716275  370051 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:31:23.838722  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.844569  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 02:31:23.853089  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.857738  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.864060  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.865519  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.926256  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.997349  370051 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:31:23.997401  370051 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.997463  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.010625  370051 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:31:24.010674  370051 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:31:24.010722  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083140  370051 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:31:24.083203  370051 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:31:24.083232  370051 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:31:24.083247  370051 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.083266  370051 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.083269  370051 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.083308  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083319  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083364  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083166  370051 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:31:24.083426  370051 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.083471  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123878  370051 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:31:24.123928  370051 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.123972  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123982  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:24.123973  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:31:24.124043  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.124051  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.124097  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.124153  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.152226  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.270585  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:31:24.305436  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:31:24.305532  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:31:24.305621  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:31:24.305629  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:31:24.305799  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:31:24.316950  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:31:24.635837  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:24.791670  370051 cache_images.go:92] LoadImages completed in 1.077558745s
	W0229 02:31:24.791798  370051 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
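
The LoadImages flow above probes each expected image in the runtime via podman, removes stale copies with crictl, and then loads from the on-disk cache; here the cache files are absent, so the warning is logged and the images get pulled later instead. A sketch of the per-image existence probe, assuming podman on the guest:

	# Prints the image ID if present; a non-zero exit means the image is
	# missing from the runtime and needs transfer (or a pull).
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.16.0 \
	  || echo "needs transfer"
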
	I0229 02:31:24.791902  370051 ssh_runner.go:195] Run: crio config
	I0229 02:31:24.851132  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:31:24.851164  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:24.851189  370051 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:24.851213  370051 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275488 NodeName:old-k8s-version-275488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:31:24.851423  370051 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-275488
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.160:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
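
A generated config like the one rendered above can be sanity-checked before the full init runs; a minimal sketch, assuming the versioned kubeadm binary at the path the log shows:

	# Run only the preflight checks against the generated config.
	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml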
	
	I0229 02:31:24.851524  370051 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275488 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
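
After the unit file and drop-in above are written to disk, systemd must re-read its unit files before the new ExecStart takes effect; the standard follow-up is:

	# Pick up the rewritten kubelet unit/drop-in and restart with it.
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet
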
	I0229 02:31:24.851598  370051 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:31:24.864237  370051 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:24.864330  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:24.879552  370051 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 02:31:24.901027  370051 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:24.920638  370051 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 02:31:24.941894  370051 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:24.947439  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:24.962396  370051 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488 for IP: 192.168.39.160
	I0229 02:31:24.962435  370051 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:24.962621  370051 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:24.962673  370051 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:24.962781  370051 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.key
	I0229 02:31:24.962851  370051 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619
	I0229 02:31:24.962919  370051 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key
	I0229 02:31:24.963087  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:24.963126  370051 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:24.963138  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:24.963185  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:24.963213  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:24.963245  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:24.963296  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:24.963980  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:24.996049  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:25.030503  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:25.057695  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:25.091982  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:25.126636  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:25.156613  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:25.186480  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:25.221012  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:25.254122  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:25.282646  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:25.312624  370051 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:25.335020  370051 ssh_runner.go:195] Run: openssl version
	I0229 02:31:25.342920  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:25.355808  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361349  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361433  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.368335  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:25.380799  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:25.393069  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398466  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398539  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.404776  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:25.416735  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:25.428884  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434503  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434584  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.441187  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
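
The hash-and-symlink sequence above follows OpenSSL's hashed-directory convention: verification looks a CA up as <subject-hash>.0 under /etc/ssl/certs, so each PEM gets a symlink named after its subject hash. The pattern for one certificate:

	# Compute the subject hash OpenSSL uses for CA directory lookup...
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# ...and expose the cert under that name (b5213941.0 in the log above).
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
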
	I0229 02:31:25.453174  370051 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:25.458712  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:25.466032  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:25.473895  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:25.482948  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:25.491808  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:25.499003  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
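
Each -checkend 86400 run above asks whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit is what would trigger regeneration. An equivalent one-off check:

	# Exit 0: still valid for at least a day; exit 1: expiring or expired.
	if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "certificate expires within 24h"
	fi
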
	I0229 02:31:25.506691  370051 kubeadm.go:404] StartCluster: {Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:25.506829  370051 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:25.506883  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:25.551867  370051 cri.go:89] found id: ""
	I0229 02:31:25.551970  370051 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:25.564446  370051 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:25.564476  370051 kubeadm.go:636] restartCluster start
	I0229 02:31:25.564545  370051 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:25.576275  370051 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:25.577406  370051 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:25.578043  370051 kubeconfig.go:146] "old-k8s-version-275488" context is missing from /home/jenkins/minikube-integration/18063-316644/kubeconfig - will repair!
	I0229 02:31:25.578979  370051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:25.580805  370051 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:25.592154  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:25.592259  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:25.609268  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:26.092701  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.092827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.108636  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
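
The repeated "Checking apiserver status" rounds above (and below) are the same pgrep probe on a roughly 500ms cadence; an equivalent bounded poll looks like:

	# Probe for a running kube-apiserver for up to 10s before giving up.
	for _ in $(seq 1 20); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo up; break; }
	  sleep 0.5
	done
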
	I0229 02:31:24.491508  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.492827  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.496040  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.062093  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:26.062582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:26.062612  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:26.062525  371066 retry.go:31] will retry after 2.231071357s: waiting for machine to come up
	I0229 02:31:28.295693  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:28.296180  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:28.296214  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:28.296116  371066 retry.go:31] will retry after 2.376277578s: waiting for machine to come up
	I0229 02:31:26.010834  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.031628  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.592320  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.592412  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.606907  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.092891  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.093028  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.112353  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.592956  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.593058  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.612315  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.092611  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.092729  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.108095  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.592592  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.592679  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.612145  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.092605  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.092720  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.113807  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.593002  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.593085  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.609337  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.092667  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.092757  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.112800  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.592328  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.592415  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.610909  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:31.092418  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.092529  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.109490  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.990551  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.990785  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:30.675432  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:30.675962  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:30.675995  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:30.675901  371066 retry.go:31] will retry after 4.442717853s: waiting for machine to come up
	I0229 02:31:30.511576  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.515611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:35.010325  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:31.593046  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.593128  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.608148  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.092187  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.092299  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.107573  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.593184  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.593312  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.607993  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.092500  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.092603  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.107359  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.592987  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.593101  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.608041  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.092919  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.093023  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.107597  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.593200  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.593295  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.608100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.092589  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.092683  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.107100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.592815  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.592928  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.610879  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.610916  370051 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:35.610930  370051 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:35.610947  370051 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:35.611032  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:35.660059  370051 cri.go:89] found id: ""
	I0229 02:31:35.660146  370051 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:35.682067  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:35.694455  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:35.694542  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707118  370051 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707149  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:35.834811  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
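
Since the stale-config check above found no kubeconfigs on disk, the restart path regenerates certificates and kubeconfigs phase by phase rather than re-running a full init. The earlier failing ls doubles as a verification step once these phases finish:

	# The four files that were missing before should now exist.
	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	  /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
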
	I0229 02:31:35.123364  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123906  369508 main.go:141] libmachine: (embed-certs-915633) Found IP for machine: 192.168.50.218
	I0229 02:31:35.123925  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has current primary IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123931  369508 main.go:141] libmachine: (embed-certs-915633) Reserving static IP address...
	I0229 02:31:35.124398  369508 main.go:141] libmachine: (embed-certs-915633) Reserved static IP address: 192.168.50.218
	I0229 02:31:35.124423  369508 main.go:141] libmachine: (embed-certs-915633) Waiting for SSH to be available...
	I0229 02:31:35.124441  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.124468  369508 main.go:141] libmachine: (embed-certs-915633) DBG | skip adding static IP to network mk-embed-certs-915633 - found existing host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"}
	I0229 02:31:35.124487  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Getting to WaitForSSH function...
	I0229 02:31:35.126676  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127004  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.127035  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127137  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH client type: external
	I0229 02:31:35.127168  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa (-rw-------)
	I0229 02:31:35.127199  369508 main.go:141] libmachine: (embed-certs-915633) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:35.127213  369508 main.go:141] libmachine: (embed-certs-915633) DBG | About to run SSH command:
	I0229 02:31:35.127224  369508 main.go:141] libmachine: (embed-certs-915633) DBG | exit 0
	I0229 02:31:35.251075  369508 main.go:141] libmachine: (embed-certs-915633) DBG | SSH cmd err, output: <nil>: 
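
The "exit 0" exchange above is libmachine's SSH readiness probe: it reruns a no-op command with host-key checking disabled until sshd accepts the connection. A bounded sketch using the same key and options shown in the log:

	# Retry a trivial SSH command until the guest's sshd is reachable.
	until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o ConnectTimeout=10 \
	      -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa \
	      docker@192.168.50.218 exit 0 2>/dev/null; do
	  sleep 2
	done
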
	I0229 02:31:35.251474  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetConfigRaw
	I0229 02:31:35.252256  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.254934  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255350  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.255378  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255676  369508 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/config.json ...
	I0229 02:31:35.255881  369508 machine.go:88] provisioning docker machine ...
	I0229 02:31:35.255905  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:35.256154  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256344  369508 buildroot.go:166] provisioning hostname "embed-certs-915633"
	I0229 02:31:35.256369  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256506  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.258794  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259163  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.259186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259337  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.259551  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259716  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259875  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.260066  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.260256  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.260269  369508 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-915633 && echo "embed-certs-915633" | sudo tee /etc/hostname
	I0229 02:31:35.383734  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-915633
	
	I0229 02:31:35.383770  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.386559  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.386913  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.386944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.387121  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.387359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387631  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387815  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.387979  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.388158  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.388175  369508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-915633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-915633/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-915633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:35.521449  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:31:35.521490  369508 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:35.521530  369508 buildroot.go:174] setting up certificates
	I0229 02:31:35.521544  369508 provision.go:83] configureAuth start
	I0229 02:31:35.521573  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.521923  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.524829  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525193  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.525217  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525348  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.527582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.527980  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.528012  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.528164  369508 provision.go:138] copyHostCerts
	I0229 02:31:35.528216  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:35.528234  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:35.528290  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:35.528384  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:35.528396  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:35.528415  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:35.528514  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:35.528525  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:35.528544  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:35.528591  369508 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.embed-certs-915633 san=[192.168.50.218 192.168.50.218 localhost 127.0.0.1 minikube embed-certs-915633]
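
The server cert above is signed by the minikube CA with the logged SAN list. A compact sketch of how such SANs get attached to a certificate with crypto/x509 (self-signed here for brevity, unlike the CA-signed real flow):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-915633"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log line above: IPs plus DNS names.
            IPAddresses: []net.IP{net.ParseIP("192.168.50.218"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "embed-certs-915633"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        out, err := os.Create("server.pem")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()
        if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }
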
	I0229 02:31:35.778616  369508 provision.go:172] copyRemoteCerts
	I0229 02:31:35.778679  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:35.778706  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.782134  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782605  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.782640  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782833  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.783103  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.783305  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.783522  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:35.870506  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:35.904595  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:31:35.936515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:35.966505  369508 provision.go:86] duration metric: configureAuth took 444.939951ms
	I0229 02:31:35.966539  369508 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:35.966725  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:31:35.966831  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.969731  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970133  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.970176  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970402  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.970623  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970788  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970968  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.971139  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.971382  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.971401  369508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:36.262676  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:36.262719  369508 machine.go:91] provisioned docker machine in 1.00682197s
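
For reference, the container-runtime option step just above reduces to writing one env file and bouncing the service; a rough Go equivalent (needs root and a systemd host, paths taken from the log):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const opts = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0o644); err != nil {
            log.Fatal(err)
        }
        // Restart CRI-O so it picks up the new options.
        if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
            log.Fatalf("restart crio: %v: %s", err, out)
        }
    }
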
	I0229 02:31:36.262731  369508 start.go:300] post-start starting for "embed-certs-915633" (driver="kvm2")
	I0229 02:31:36.262743  369508 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:36.262765  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.263140  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:36.263179  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.265718  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266095  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.266126  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.266486  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.266658  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.266795  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.359474  369508 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:36.365071  369508 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:36.365110  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:36.365202  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:36.365279  369508 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:36.365395  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:36.376823  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:36.406525  369508 start.go:303] post-start completed in 143.75518ms
	I0229 02:31:36.406588  369508 fix.go:56] fixHost completed within 20.310442727s
	I0229 02:31:36.406619  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.409415  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.409840  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.409875  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.410009  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.410214  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410412  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410567  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.410715  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:36.410936  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:36.410950  369508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:36.520508  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173896.494400897
	
	I0229 02:31:36.520543  369508 fix.go:206] guest clock: 1709173896.494400897
	I0229 02:31:36.520555  369508 fix.go:219] Guest: 2024-02-29 02:31:36.494400897 +0000 UTC Remote: 2024-02-29 02:31:36.406594326 +0000 UTC m=+361.755087901 (delta=87.806571ms)
	I0229 02:31:36.520584  369508 fix.go:190] guest clock delta is within tolerance: 87.806571ms
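
The guest-clock check works by running `date +%s.%N` on the VM and diffing against the host clock. A sketch of that parse-and-compare, using the sample value from this log (the 2s tolerance below is an assumption, not a value printed here):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestTime parses `date +%s.%N` output into a time.Time.
    func guestTime(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        g, err := guestTime("1709173896.494400897") // sample output from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(g)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", delta, delta < 2*time.Second)
    }
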
	I0229 02:31:36.520597  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 20.424490067s
	I0229 02:31:36.520629  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.520949  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:36.523819  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524146  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.524185  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.524912  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525109  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525206  369508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:36.525251  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.525332  369508 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:36.525360  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.528265  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528470  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528614  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528641  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528826  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528829  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.528855  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.529047  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.529135  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529253  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529321  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529414  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529478  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.529556  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.611757  369508 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:36.638875  369508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:36.786219  369508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:36.798964  369508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:36.799056  369508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:36.817942  369508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:36.817975  369508 start.go:475] detecting cgroup driver to use...
	I0229 02:31:36.818086  369508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:36.837019  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:36.855078  369508 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:36.855159  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:36.873444  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:36.891708  369508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:37.031928  369508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:37.212859  369508 docker.go:233] disabling docker service ...
	I0229 02:31:37.212960  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:37.235232  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:37.253901  369508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:37.401366  369508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:37.530791  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:37.547864  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:37.570344  369508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:31:37.570416  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.582275  369508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:37.582345  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.593628  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.605168  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
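
Those sed invocations just rewrite single keys in the CRI-O drop-in. The same edit done natively in Go with a multiline regexp (run against a copy of /etc/crio/crio.conf.d/02-crio.conf unless you mean it):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // (?m) makes ^ and $ match per line, like sed's default addressing.
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
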
	I0229 02:31:37.616567  369508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:37.628153  369508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:37.638579  369508 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:37.638640  369508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:37.652738  369508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
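
The netfilter fallback above is: if the bridge sysctl file is absent, load br_netfilter, then force IPv4 forwarding on. A direct sketch (root required):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            // The sysctl only exists once the module is loaded.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }
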
	I0229 02:31:37.664118  369508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:37.785330  369508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:37.933006  369508 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:37.933095  369508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:37.938625  369508 start.go:543] Will wait 60s for crictl version
	I0229 02:31:37.938702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:31:37.943285  369508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:37.984992  369508 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:37.985105  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.018467  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.051472  369508 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:31:34.991345  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.991987  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:38.052850  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:38.055688  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.055970  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:38.056006  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.056253  369508 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:38.060925  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:38.076126  369508 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:31:38.076197  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:38.116261  369508 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:31:38.116372  369508 ssh_runner.go:195] Run: which lz4
	I0229 02:31:38.121080  369508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:38.125711  369508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:38.125755  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:31:37.012008  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:39.018348  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.790885  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.042778  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.130251  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.215289  370051 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:37.215384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:37.715589  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.715938  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.215781  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.716505  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.216238  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.716182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.992988  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.491712  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.492458  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:40.139859  369508 crio.go:444] Took 2.018817 seconds to copy over tarball
	I0229 02:31:40.139953  369508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:31:43.071745  369508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.931752333s)
	I0229 02:31:43.071797  369508 crio.go:451] Took 2.931905 seconds to extract the tarball
	I0229 02:31:43.071809  369508 ssh_runner.go:146] rm: /preloaded.tar.lz4
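
The preload path is: stat the tarball, scp it over if missing, extract with lz4, delete. A sketch of the extract step as the log runs it (lz4 must be on PATH; /preloaded.tar.lz4 and /var are the logged paths):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); err != nil {
            log.Fatalf("preload tarball missing (would be scp'd over first): %v", err)
        }
        // Mirrors: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract: %v: %s", err, out)
        }
        os.Remove(tarball) // the log removes it after extraction, too
    }
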
	I0229 02:31:43.118127  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:43.171147  369508 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:31:43.171176  369508 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:31:43.171262  369508 ssh_runner.go:195] Run: crio config
	I0229 02:31:43.232177  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:31:43.232203  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:43.232229  369508 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:43.232247  369508 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-915633 NodeName:embed-certs-915633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:31:43.232419  369508 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-915633"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
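The kubeadm YAML above is rendered from the options struct logged at kubeadm.go:176. A toy text/template rendering of its first stanza (template body and field names here are illustrative, not minikube's actual template):

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        err := t.Execute(os.Stdout, map[string]any{
            "NodeIP":   "192.168.50.218", // values from the log above
            "Port":     8443,
            "NodeName": "embed-certs-915633",
        })
        if err != nil {
            log.Fatal(err)
        }
    }
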
	I0229 02:31:43.232519  369508 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-915633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:31:43.232596  369508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:31:43.244392  369508 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:43.244467  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:43.256293  369508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0229 02:31:43.275397  369508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:43.295494  369508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0229 02:31:43.316812  369508 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:43.321496  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:43.335055  369508 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633 for IP: 192.168.50.218
	I0229 02:31:43.335092  369508 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:43.335270  369508 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:43.335316  369508 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:43.335388  369508 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/client.key
	I0229 02:31:43.335442  369508 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key.cc0da009
	I0229 02:31:43.335475  369508 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key
	I0229 02:31:43.335584  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:43.335610  369508 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:43.335619  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:43.335642  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:43.335673  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:43.335710  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:43.335779  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:43.336455  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:43.364985  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:43.394189  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:43.424515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:43.456589  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:43.486396  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:43.516931  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:43.546421  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:43.578923  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:43.608333  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:43.637196  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:43.667522  369508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:43.688266  369508 ssh_runner.go:195] Run: openssl version
	I0229 02:31:43.695616  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:43.709892  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715346  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715426  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.722688  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:43.735866  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:43.749967  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757599  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757671  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.765157  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:43.779671  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:43.792900  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798505  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798576  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.805192  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:31:43.818233  369508 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:43.823681  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:43.831016  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:43.837899  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:43.844802  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:43.851881  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:43.858689  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
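
Each `openssl x509 -checkend 86400` call above asks whether a cert expires within 24h. The equivalent check in pure Go, against one of the logged paths:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt") // path from the log
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // -checkend 86400: fail if NotAfter is less than 86400s away.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h; would regenerate")
        } else {
            fmt.Println("certificate OK until", cert.NotAfter)
        }
    }
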
	I0229 02:31:43.865749  369508 kubeadm.go:404] StartCluster: {Name:embed-certs-915633 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:43.865852  369508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:43.865925  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:43.906012  369508 cri.go:89] found id: ""
	I0229 02:31:43.906116  369508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:43.918241  369508 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:43.918265  369508 kubeadm.go:636] restartCluster start
	I0229 02:31:43.918349  369508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:43.930524  369508 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:43.931550  369508 kubeconfig.go:92] found "embed-certs-915633" server: "https://192.168.50.218:8443"
	I0229 02:31:43.933612  369508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:43.944469  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:43.944519  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:43.958194  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:44.444746  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:44.444840  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:44.458567  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:41.510364  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.511424  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.216236  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:41.716082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.215537  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.715524  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.215873  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.715634  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.216464  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.715519  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.216430  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.716196  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.990995  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.489390  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:44.944934  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.003707  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.018797  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.445348  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.445435  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.460199  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.944750  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.944879  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.959309  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.445218  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.445313  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.459195  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.945456  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.945538  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.959212  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.444711  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.444819  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.459189  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.944651  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.944726  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.958733  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.445008  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.445100  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.460126  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.944649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.944731  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.959993  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:49.444545  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.444628  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.458889  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.011657  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.508465  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:46.215715  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:46.715657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.216495  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.715491  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.215459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.715556  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.215675  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.716046  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.215993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.715594  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.489578  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.990638  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:49.945108  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.945265  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.960625  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.444843  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.444923  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.459329  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.944871  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.944963  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.959583  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.444601  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.444704  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.462037  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.944573  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.944658  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.958538  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.445111  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.445269  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.462902  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.945088  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.945182  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.960241  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.444649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.444738  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.458642  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.945214  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.945291  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.960552  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.960588  369508 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
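The repeated pgrep failures above are minikube's apiserver process probe: it runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH roughly twice a second, and pgrep exiting with status 1 (no match, empty stdout/stderr) means no apiserver is running; once the overall context deadline expires, kubeadm.go marks the cluster as needing reconfiguration. (Several concurrently running profiles interleave in this log, distinguishable by PID: 369508 is embed-certs-915633; 370051, 369591 and 369869 belong to the other concurrent StartStop profiles.) A minimal local sketch of that probe, with a plain exec.Command standing in for minikube's ssh_runner:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until kube-apiserver shows up or the
// deadline expires. pgrep exits 1 when nothing matches, which is the
// "stopped: unable to get apiserver pid" case in the log above.
// Sketch only: minikube runs this over SSH via ssh_runner, not locally.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x exact match, -n newest process, -f match the full command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // PID found
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return "", errors.New("apiserver error: context deadline exceeded")
}

func main() {
	pid, err := waitForAPIServerPID(10 * time.Second)
	if err != nil {
		fmt.Println("needs reconfigure:", err) // same verdict as kubeadm.go:611
		return
	}
	fmt.Println("apiserver pid:", pid)
}
```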
	I0229 02:31:53.960600  369508 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:53.960615  369508 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:53.960671  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:54.005230  369508 cri.go:89] found id: ""
	I0229 02:31:54.005321  369508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:54.027544  369508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:54.040517  369508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:54.040577  369508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051200  369508 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051223  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:54.168817  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:50.509119  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.509526  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:54.511540  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:51.215927  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:51.715888  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.215659  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.715769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.216175  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.715755  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.216468  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.216280  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.715924  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.992721  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:57.490570  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:55.091652  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.346578  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.443373  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
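With no kube-system containers found and no kubeconfigs left on disk (the `ls -la` check above exits 2, so stale-config cleanup is skipped), the cluster is rebuilt by replaying individual `kubeadm init` phases against the freshly copied /var/tmp/minikube/kubeadm.yaml rather than running a full `kubeadm init`. A sketch of that phase sequence exactly as it appears in the log, with local bash invocations standing in for ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Phases replayed from the log above; each maps to one
// "kubeadm init phase ..." invocation against the same config file.
var phases = []string{
	"certs all",
	"kubeconfig all",
	"kubelet-start",
	"control-plane all",
	"etcd local",
}

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	for _, p := range phases {
		// minikube prefixes PATH with its own binaries directory
		// (/var/lib/minikube/binaries/v1.28.4) before calling kubeadm.
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config %s`, p, cfg)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}
```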
	I0229 02:31:55.542444  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:55.542562  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.042870  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.542972  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.571776  369508 api_server.go:72] duration metric: took 1.029332492s to wait for apiserver process to appear ...
	I0229 02:31:56.571808  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:56.571831  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:56.572606  369508 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0229 02:31:57.072145  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.557011  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.557048  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.557066  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.609944  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.610010  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.610028  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.669911  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:59.669955  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:57.010655  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:59.510097  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:00.071971  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.084661  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.084690  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:00.572262  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.577772  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.577807  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:01.072371  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:01.077306  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:32:01.084492  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:32:01.084531  369508 api_server.go:131] duration metric: took 4.512702749s to wait for apiserver health ...
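Once an apiserver process exists, readiness switches from pgrep to HTTP: /healthz is polled until it returns 200 "ok". The progression above is the normal startup shape: connection refused while the socket opens, 403 for system:anonymous until the rbac/bootstrap-roles poststarthook installs the default binding that lets unauthenticated clients read /healthz, 500 while the remaining poststarthooks drain, then 200. A minimal poll loop under the same assumptions (TLS verification skipped since the probe is anonymous; a sketch, not minikube's actual client setup):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz mirrors the loop in the log: keep GETing /healthz until the
// body is exactly "ok". 403s (anonymous access before rbac bootstrap) and
// 500s (poststarthooks still failing) both mean "retry".
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's self-signed serving cert can't be verified
		// by an anonymous probe; skip verification in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			fmt.Printf("healthz returned %d\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	if err := pollHealthz("https://192.168.50.218:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```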
	I0229 02:32:01.084544  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:32:01.084554  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:32:01.086337  369508 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:56.215653  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.715898  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.215954  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.216366  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.716093  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.215944  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.715553  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.216341  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.715677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.087584  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:32:01.099724  369508 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
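The bridge CNI step materializes a single conflist on the guest; the log records only the destination path and size (457 bytes to /etc/cni/net.d/1-k8s.conflist), not the contents. A representative bridge-plus-portmap conflist of roughly that shape (field values here are illustrative, not the exact bytes minikube wrote):

```go
package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI conflist of the kind minikube drops into
// /etc/cni/net.d. The log only tells us the path and size; the values
// below are illustrative, with the pod subnet assumed to be the common
// 10.244.0.0/16 default.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Stands in for the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write conflist:", err)
	}
}
```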
	I0229 02:32:01.122381  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:32:01.133632  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:32:01.133674  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:32:01.133684  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:32:01.133697  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:32:01.133710  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:32:01.133720  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:32:01.133728  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:32:01.133738  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:32:01.133746  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:32:01.133755  369508 system_pods.go:74] duration metric: took 11.346225ms to wait for pod list to return data ...
	I0229 02:32:01.133767  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:32:01.138716  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:32:01.138746  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:32:01.138760  369508 node_conditions.go:105] duration metric: took 4.985648ms to run NodePressure ...
	I0229 02:32:01.138783  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:01.368503  369508 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373648  369508 kubeadm.go:787] kubelet initialised
	I0229 02:32:01.373669  369508 kubeadm.go:788] duration metric: took 5.137378ms waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373677  369508 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:01.379649  369508 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.384724  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384750  369508 pod_ready.go:81] duration metric: took 5.071017ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.384758  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384765  369508 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.390019  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390048  369508 pod_ready.go:81] duration metric: took 5.27491ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.390059  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390067  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.396275  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396294  369508 pod_ready.go:81] duration metric: took 6.218856ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.396302  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396307  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.525881  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525914  369508 pod_ready.go:81] duration metric: took 129.596783ms waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.525927  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525935  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.926806  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926843  369508 pod_ready.go:81] duration metric: took 400.889304ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.926856  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926864  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.326588  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326621  369508 pod_ready.go:81] duration metric: took 399.74816ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.326633  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326639  369508 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.727730  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727759  369508 pod_ready.go:81] duration metric: took 401.108694ms waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.727769  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727776  369508 pod_ready.go:38] duration metric: took 1.354090438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
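The pod_ready.go wait above reduces to polling each pod's PodReady condition; while the hosting node itself still reports Ready=False, each per-pod wait short-circuits with the `(skipping!)` error seen above instead of burning the full timeout. A client-go sketch of the underlying per-pod check (hypothetical helper name; kubeconfig path assumed from the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls one pod until its PodReady condition is True, the same
// check pod_ready.go reports as `has status "Ready":"False"` while waiting.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-5dd5756b68-kt28m", 4*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
```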
	I0229 02:32:02.727795  369508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:32:02.742069  369508 ops.go:34] apiserver oom_adj: -16
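The -16 read back from /proc/&lt;pid&gt;/oom_adj is the legacy-interface view of the strongly negative oom_score_adj that kubeadm static pods give the apiserver, i.e. the process is well protected from the OOM killer. A sketch of the same check done natively rather than via `cat /proc/$(pgrep kube-apiserver)/oom_adj`:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the apiserver PID, then read its legacy OOM adjustment, as the
	// bash one-liner in the log does.
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("apiserver not running:", err)
		return
	}
	raw, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
	if err != nil {
		fmt.Println("read oom_adj:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(raw))) // expect -16
}
```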
	I0229 02:32:02.742097  369508 kubeadm.go:640] restartCluster took 18.823823408s
	I0229 02:32:02.742107  369508 kubeadm.go:406] StartCluster complete in 18.876382148s
	I0229 02:32:02.742127  369508 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.742271  369508 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:32:02.744032  369508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.744292  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:32:02.744429  369508 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:32:02.744507  369508 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-915633"
	I0229 02:32:02.744526  369508 addons.go:69] Setting default-storageclass=true in profile "embed-certs-915633"
	I0229 02:32:02.744540  369508 addons.go:69] Setting metrics-server=true in profile "embed-certs-915633"
	I0229 02:32:02.744550  369508 addons.go:234] Setting addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:02.744555  369508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-915633"
	W0229 02:32:02.744558  369508 addons.go:243] addon metrics-server should already be in state true
	I0229 02:32:02.744619  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744532  369508 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-915633"
	W0229 02:32:02.744735  369508 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:32:02.744853  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744682  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:32:02.745085  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745113  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745121  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745175  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745339  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745416  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.749865  369508 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-915633" context rescaled to 1 replicas
	I0229 02:32:02.749907  369508 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:32:02.751823  369508 out.go:177] * Verifying Kubernetes components...
	I0229 02:32:02.753296  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:32:02.762688  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0229 02:32:02.763050  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0229 02:32:02.763274  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763693  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763872  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.763895  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.763963  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0229 02:32:02.764307  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.764337  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.764554  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.764592  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.764665  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.765103  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765135  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765144  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765170  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765481  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.765495  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.765863  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.766129  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.769253  369508 addons.go:234] Setting addon default-storageclass=true in "embed-certs-915633"
	W0229 02:32:02.769274  369508 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:32:02.769295  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.769578  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.769607  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.787345  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35577
	I0229 02:32:02.787806  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.788243  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.788266  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.789755  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0229 02:32:02.790272  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.790361  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0229 02:32:02.790634  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.790727  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.791027  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.791192  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791206  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791367  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791402  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791705  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.791924  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.792315  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.792987  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.793026  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.793278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.795128  369508 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:32:02.794105  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.796451  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:32:02.796472  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:32:02.796496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.797812  369508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:59.493919  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.989683  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:02.799249  369508 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.799270  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:32:02.799289  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.800109  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.800960  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.801015  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.801300  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.801496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.801635  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.801763  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.802278  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802617  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.802645  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802836  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.803026  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.803174  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.803390  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.818656  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0229 02:32:02.819105  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.819606  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.819625  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.820022  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.820366  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.822054  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.822412  369508 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:02.822432  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:32:02.822451  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.825579  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826260  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.826293  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826463  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.826614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.826761  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.826954  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.911316  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.945655  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:32:02.945683  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:32:02.981318  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:32:02.981352  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:32:02.983632  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:03.009561  369508 node_ready.go:35] waiting up to 6m0s for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:03.009586  369508 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:32:03.044265  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:03.044293  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:32:03.094073  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:04.287008  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.3033415s)
	I0229 02:32:04.287081  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287094  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287375  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37602435s)
	I0229 02:32:04.287416  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287428  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287440  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287463  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287478  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287487  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287750  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287800  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287828  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287861  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287805  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287914  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287834  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.287774  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289370  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289377  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.289397  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.293892  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.293919  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.294180  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.294198  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.294212  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.376595  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.28244915s)
	I0229 02:32:04.376679  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.376710  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377004  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377022  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377031  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.377039  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377037  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377275  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377319  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377331  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377348  369508 addons.go:470] Verifying addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:04.380194  369508 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:32:04.381510  369508 addons.go:505] enable addons completed in 1.637082823s: enabled=[storage-provisioner default-storageclass metrics-server]
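Addon enablement above is manifest staging plus `kubectl apply`: each addon's rendered YAML is scp'd into /etc/kubernetes/addons/ on the guest and applied with the cluster's own kubectl binary against /var/lib/minikube/kubeconfig (the `Calling .Close` / `Closing plugin on server side` chatter is the libmachine kvm2 driver's RPC plugin being torn down after each call). A condensed sketch of the metrics-server apply exactly as run above, with local exec standing in for SSH:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Manifests staged by the metrics-server addon, per the scp lines above.
var manifests = []string{
	"/etc/kubernetes/addons/metrics-apiservice.yaml",
	"/etc/kubernetes/addons/metrics-server-deployment.yaml",
	"/etc/kubernetes/addons/metrics-server-rbac.yaml",
	"/etc/kubernetes/addons/metrics-server-service.yaml",
}

func main() {
	// Equivalent to: sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	//   /var/lib/minikube/binaries/v1.28.4/kubectl apply -f ... -f ...
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
		return
	}
	fmt.Println(strings.TrimSpace(string(out)))
}
```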
	I0229 02:32:02.010578  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:04.509975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.216197  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.716302  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.216170  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.715615  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.216580  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.716088  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.215743  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.716142  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.216543  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.715853  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.991440  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.992389  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:08.491225  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.014879  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.518854  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.009085  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:09.009296  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:06.216206  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:06.715748  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.215964  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.716419  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.216034  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.715611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.216207  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.716408  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.216144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.716454  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.491751  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:12.991326  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:10.013574  369508 node_ready.go:49] node "embed-certs-915633" has status "Ready":"True"
	I0229 02:32:10.013605  369508 node_ready.go:38] duration metric: took 7.004009102s waiting for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:10.013617  369508 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:10.020332  369508 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025740  369508 pod_ready.go:92] pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.025766  369508 pod_ready.go:81] duration metric: took 5.403764ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025778  369508 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534182  369508 pod_ready.go:92] pod "etcd-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.534212  369508 pod_ready.go:81] duration metric: took 508.426559ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534238  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.048997  369508 pod_ready.go:92] pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:11.049027  369508 pod_ready.go:81] duration metric: took 514.780048ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.049040  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:13.056477  369508 pod_ready.go:102] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.010305  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:13.011477  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.215611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:11.716198  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.216332  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.716413  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.216407  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.716466  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.216182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.716285  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.215995  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.715613  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.491511  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:17.494485  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:15.056064  369508 pod_ready.go:92] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.056093  369508 pod_ready.go:81] duration metric: took 4.007044542s waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.056104  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061418  369508 pod_ready.go:92] pod "kube-proxy-6tt7l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.061440  369508 pod_ready.go:81] duration metric: took 5.329971ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061451  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578305  369508 pod_ready.go:92] pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.578332  369508 pod_ready.go:81] duration metric: took 516.873281ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578341  369508 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:17.585624  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:19.586470  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
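	The pod_ready entries from process 369508 walk each system-critical pod in turn and block until its Ready condition is true, logging the elapsed time as a duration metric. The same check can be approximated from the command line with kubectl wait rather than minikube's internal pod_ready helper; the context name below reuses the profile name from the log, and the k8s-app=metrics-server label selector is an assumption about the addon's manifest:
	
	    # Wait up to 6m for the metrics-server pod's Ready condition, mirroring
	    # the 'waiting up to 6m0s ... to be "Ready"' entries above.
	    kubectl --context embed-certs-915633 -n kube-system \
	      wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=6m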
	I0229 02:32:15.510630  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:18.010381  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:16.215530  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:16.716420  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.216031  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.716303  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.216082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.715523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.216166  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.716503  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.215680  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.715770  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.989766  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.989821  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.586820  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:20.509895  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.010371  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.215523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:21.715617  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.216133  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.716029  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.216141  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.715578  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.215640  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.715601  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.215959  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.716394  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.990493  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.990911  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.489681  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.085933  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.086754  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.508765  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:27.508956  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:29.512409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.215946  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:26.715834  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.216243  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.715581  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.215521  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.715849  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.716497  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.215657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.715492  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.490400  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:32.990250  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:30.586107  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:33.086852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.518170  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:34.009514  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.216322  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:31.716160  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.215557  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.715618  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.215761  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.716216  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.216460  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.716244  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.215551  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.715633  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.990305  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.990956  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:35.585472  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:37.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.509112  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:38.509634  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.215910  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:36.716307  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:37.216308  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:37.216404  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:37.262324  370051 cri.go:89] found id: ""
	I0229 02:32:37.262358  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.262370  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:37.262378  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:37.262442  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:37.303758  370051 cri.go:89] found id: ""
	I0229 02:32:37.303790  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.303802  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:37.303809  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:37.303880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:37.349512  370051 cri.go:89] found id: ""
	I0229 02:32:37.349538  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.349546  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:37.349553  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:37.349607  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:37.389630  370051 cri.go:89] found id: ""
	I0229 02:32:37.389657  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.389668  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:37.389676  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:37.389752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:37.435918  370051 cri.go:89] found id: ""
	I0229 02:32:37.435954  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.435967  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:37.435976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:37.436044  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:37.479336  370051 cri.go:89] found id: ""
	I0229 02:32:37.479369  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.479377  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:37.479384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:37.479460  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:37.519944  370051 cri.go:89] found id: ""
	I0229 02:32:37.519979  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.519991  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:37.519999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:37.520071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:37.563848  370051 cri.go:89] found id: ""
	I0229 02:32:37.563875  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.563884  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
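	Each cri.go/logs.go pair above is one query against the CRI runtime: list all containers, in any state, whose name matches a given control-plane component. An empty ID list is what produces the "0 containers: []" / "No container was found" pairs. The underlying commands, taken verbatim from the log, amount to:
	
	    # Query CRI-O via crictl for every expected component; each command
	    # prints matching container IDs, or nothing when none exist.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done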
	I0229 02:32:37.563895  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:37.563915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:37.607989  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:37.608025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:37.660272  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:37.660324  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:37.676878  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:37.676909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:37.805099  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
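	The describe-nodes step fails for the same reason every crictl query comes back empty: with no kube-apiserver container running, nothing is listening on localhost:8443, so kubectl's connection is refused and the command exits with status 1. The failing command, copied from the log, can be rerun by hand on the node to confirm:
	
	    # Expected to fail with "connection refused" while the apiserver is down.
	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig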
	I0229 02:32:37.805132  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:37.805159  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
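	The remaining "Gathering logs" steps shell out to journalctl and dmesg over SSH; the commands below are the ones recorded in the entries above, grouped here for reference:
	
	    sudo journalctl -u kubelet -n 400   # last 400 kubelet journal entries
	    sudo journalctl -u crio -n 400      # last 400 CRI-O journal entries
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400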
	I0229 02:32:40.378467  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:40.393066  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:40.393221  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:40.432592  370051 cri.go:89] found id: ""
	I0229 02:32:40.432619  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.432628  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:40.432634  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:40.432693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:40.473651  370051 cri.go:89] found id: ""
	I0229 02:32:40.473706  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.473716  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:40.473722  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:40.473781  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:40.520262  370051 cri.go:89] found id: ""
	I0229 02:32:40.520292  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.520303  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:40.520312  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:40.520374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:40.560359  370051 cri.go:89] found id: ""
	I0229 02:32:40.560393  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.560402  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:40.560408  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:40.560474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:40.602145  370051 cri.go:89] found id: ""
	I0229 02:32:40.602173  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.602181  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:40.602187  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:40.602266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:40.640744  370051 cri.go:89] found id: ""
	I0229 02:32:40.640778  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.640791  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:40.640799  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:40.640869  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:40.681863  370051 cri.go:89] found id: ""
	I0229 02:32:40.681895  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.681908  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:40.681916  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:40.681985  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:40.725859  370051 cri.go:89] found id: ""
	I0229 02:32:40.725890  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.725899  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:40.725910  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:40.725924  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:40.794666  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:40.794705  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:40.854173  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:40.854215  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:40.901744  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:40.901786  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:40.925331  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:40.925371  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:41.005785  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:39.491292  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.494077  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:40.086540  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:42.584644  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:44.587012  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.010764  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.510128  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.506756  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:43.522038  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:43.522135  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:43.559609  370051 cri.go:89] found id: ""
	I0229 02:32:43.559635  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.559642  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:43.559649  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:43.559707  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:43.609059  370051 cri.go:89] found id: ""
	I0229 02:32:43.609087  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.609096  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:43.609102  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:43.609159  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:43.648988  370051 cri.go:89] found id: ""
	I0229 02:32:43.649018  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.649029  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:43.649037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:43.649104  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:43.690995  370051 cri.go:89] found id: ""
	I0229 02:32:43.691028  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.691042  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:43.691054  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:43.691120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:43.729221  370051 cri.go:89] found id: ""
	I0229 02:32:43.729249  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.729257  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:43.729263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:43.729334  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:43.767141  370051 cri.go:89] found id: ""
	I0229 02:32:43.767174  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.767186  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:43.767194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:43.767266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:43.807926  370051 cri.go:89] found id: ""
	I0229 02:32:43.807962  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.807970  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:43.807976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:43.808029  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:43.857945  370051 cri.go:89] found id: ""
	I0229 02:32:43.857973  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.857981  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:43.857991  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:43.858005  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:43.941290  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:43.941338  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:43.986788  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:43.986823  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:44.037384  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:44.037421  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:44.052668  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:44.052696  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:44.127124  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:43.990179  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.990921  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.991525  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.086821  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:49.585987  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.510273  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:48.009067  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:50.011776  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:46.627409  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:46.642306  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:46.642397  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:46.685358  370051 cri.go:89] found id: ""
	I0229 02:32:46.685389  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.685400  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:46.685431  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:46.685493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:46.724996  370051 cri.go:89] found id: ""
	I0229 02:32:46.725026  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.725035  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:46.725041  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:46.725113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:46.765815  370051 cri.go:89] found id: ""
	I0229 02:32:46.765849  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.765857  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:46.765863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:46.765924  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:46.808946  370051 cri.go:89] found id: ""
	I0229 02:32:46.808980  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.808991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:46.809000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:46.809068  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:46.865068  370051 cri.go:89] found id: ""
	I0229 02:32:46.865106  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.865119  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:46.865127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:46.865200  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:46.932233  370051 cri.go:89] found id: ""
	I0229 02:32:46.932260  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.932268  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:46.932275  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:46.932331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:46.985701  370051 cri.go:89] found id: ""
	I0229 02:32:46.985732  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.985744  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:46.985752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:46.985819  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:47.027497  370051 cri.go:89] found id: ""
	I0229 02:32:47.027524  370051 logs.go:276] 0 containers: []
	W0229 02:32:47.027536  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:47.027548  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:47.027565  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:47.075955  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:47.075990  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:47.093922  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:47.093949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:47.165000  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:47.165029  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:47.165046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:47.250161  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:47.250201  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:49.794654  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:49.809706  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:49.809787  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:49.868163  370051 cri.go:89] found id: ""
	I0229 02:32:49.868197  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.868217  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:49.868223  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:49.868277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:49.928462  370051 cri.go:89] found id: ""
	I0229 02:32:49.928495  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.928508  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:49.928516  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:49.928580  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:49.975725  370051 cri.go:89] found id: ""
	I0229 02:32:49.975755  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.975765  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:49.975774  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:49.975849  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:50.017007  370051 cri.go:89] found id: ""
	I0229 02:32:50.017036  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.017046  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:50.017051  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:50.017118  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:50.054522  370051 cri.go:89] found id: ""
	I0229 02:32:50.054551  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.054560  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:50.054566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:50.054620  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:50.096274  370051 cri.go:89] found id: ""
	I0229 02:32:50.096300  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.096308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:50.096319  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:50.096382  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:50.142543  370051 cri.go:89] found id: ""
	I0229 02:32:50.142581  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.142590  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:50.142597  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:50.142667  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:50.182452  370051 cri.go:89] found id: ""
	I0229 02:32:50.182482  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.182492  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:50.182505  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:50.182522  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:50.266311  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:50.266355  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:50.309277  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:50.309322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:50.360492  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:50.360536  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:50.376711  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:50.376744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:50.447128  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:49.992032  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.490801  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:51.586053  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:53.586268  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.510054  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:54.510975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.947926  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:52.970209  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:52.970317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:53.010840  370051 cri.go:89] found id: ""
	I0229 02:32:53.010868  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.010878  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:53.010886  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:53.010983  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:53.049458  370051 cri.go:89] found id: ""
	I0229 02:32:53.049490  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.049503  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:53.049511  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:53.049578  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:53.088615  370051 cri.go:89] found id: ""
	I0229 02:32:53.088646  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.088656  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:53.088671  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:53.088738  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:53.130176  370051 cri.go:89] found id: ""
	I0229 02:32:53.130210  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.130237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:53.130247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:53.130317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:53.177876  370051 cri.go:89] found id: ""
	I0229 02:32:53.177908  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.177920  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:53.177928  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:53.177991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:53.216036  370051 cri.go:89] found id: ""
	I0229 02:32:53.216065  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.216074  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:53.216080  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:53.216143  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:53.254673  370051 cri.go:89] found id: ""
	I0229 02:32:53.254705  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.254716  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:53.254724  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:53.254785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:53.291508  370051 cri.go:89] found id: ""
	I0229 02:32:53.291539  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.291551  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:53.291564  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:53.291581  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:53.343312  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:53.343354  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:53.359264  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:53.359294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:53.431396  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:53.431428  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:53.431445  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:53.512494  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:53.512529  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:56.057340  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:56.073074  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:56.073158  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:56.111650  370051 cri.go:89] found id: ""
	I0229 02:32:56.111684  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.111704  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:56.111713  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:56.111785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:54.990490  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.991005  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:55.587290  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:58.086312  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:57.008288  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:59.011396  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.150147  370051 cri.go:89] found id: ""
	I0229 02:32:56.150178  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.150191  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:56.150200  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:56.150280  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:56.192842  370051 cri.go:89] found id: ""
	I0229 02:32:56.192878  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.192890  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:56.192898  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:56.192969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:56.232013  370051 cri.go:89] found id: ""
	I0229 02:32:56.232051  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.232062  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:56.232079  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:56.232151  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:56.273824  370051 cri.go:89] found id: ""
	I0229 02:32:56.273858  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.273871  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:56.273882  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:56.273949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:56.312112  370051 cri.go:89] found id: ""
	I0229 02:32:56.312139  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.312147  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:56.312153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:56.312203  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:56.352558  370051 cri.go:89] found id: ""
	I0229 02:32:56.352585  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.352593  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:56.352600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:56.352666  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:56.397719  370051 cri.go:89] found id: ""
	I0229 02:32:56.397762  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.397775  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:56.397790  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:56.397808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:56.447793  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:56.447831  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:56.463859  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:56.463894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:56.540306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:56.540333  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:56.540347  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:56.633201  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:56.633247  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.207459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:59.222165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:59.222271  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:59.261197  370051 cri.go:89] found id: ""
	I0229 02:32:59.261230  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.261242  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:59.261251  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:59.261338  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:59.300874  370051 cri.go:89] found id: ""
	I0229 02:32:59.300917  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.300940  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:59.300950  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:59.301025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:59.345399  370051 cri.go:89] found id: ""
	I0229 02:32:59.345435  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.345446  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:59.345455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:59.345525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:59.386068  370051 cri.go:89] found id: ""
	I0229 02:32:59.386102  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.386112  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:59.386132  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:59.386184  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:59.436597  370051 cri.go:89] found id: ""
	I0229 02:32:59.436629  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.436641  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:59.436649  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:59.436708  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:59.481417  370051 cri.go:89] found id: ""
	I0229 02:32:59.481446  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.481462  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:59.481469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:59.481535  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:59.527725  370051 cri.go:89] found id: ""
	I0229 02:32:59.527752  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.527763  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:59.527771  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:59.527845  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:59.574502  370051 cri.go:89] found id: ""
	I0229 02:32:59.574535  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.574547  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:59.574561  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:59.574579  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:59.669584  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:59.669630  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.730049  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:59.730096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:59.779562  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:59.779613  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:59.797016  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:59.797046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:59.876438  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:58.991584  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.489321  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:03.489615  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:00.585463  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:02.587986  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.588479  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.509980  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.009579  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:02.377144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:02.391585  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:02.391682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:02.432359  370051 cri.go:89] found id: ""
	I0229 02:33:02.432390  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.432399  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:02.432406  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:02.432462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:02.476733  370051 cri.go:89] found id: ""
	I0229 02:33:02.476768  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.476781  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:02.476790  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:02.476856  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:02.521414  370051 cri.go:89] found id: ""
	I0229 02:33:02.521440  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.521448  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:02.521454  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:02.521513  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:02.561663  370051 cri.go:89] found id: ""
	I0229 02:33:02.561690  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.561698  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:02.561704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:02.561755  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:02.611953  370051 cri.go:89] found id: ""
	I0229 02:33:02.611989  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.612002  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:02.612010  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:02.612079  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:02.663254  370051 cri.go:89] found id: ""
	I0229 02:33:02.663282  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.663290  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:02.663297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:02.663348  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:02.721449  370051 cri.go:89] found id: ""
	I0229 02:33:02.721484  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.721497  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:02.721506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:02.721579  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:02.761197  370051 cri.go:89] found id: ""
	I0229 02:33:02.761239  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.761251  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:02.761265  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:02.761282  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:02.810457  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:02.810498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:02.828906  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:02.828940  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:02.911895  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:02.911932  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:02.911945  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:02.995120  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:02.995152  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:05.544629  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:05.559266  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:05.559342  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:05.609673  370051 cri.go:89] found id: ""
	I0229 02:33:05.609706  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.609718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:05.609727  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:05.609795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:05.665161  370051 cri.go:89] found id: ""
	I0229 02:33:05.665192  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.665203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:05.665211  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:05.665282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:05.719923  370051 cri.go:89] found id: ""
	I0229 02:33:05.719949  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.719957  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:05.719963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:05.720025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:05.765189  370051 cri.go:89] found id: ""
	I0229 02:33:05.765224  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.765237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:05.765245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:05.765357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:05.803788  370051 cri.go:89] found id: ""
	I0229 02:33:05.803820  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.803829  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:05.803836  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:05.803909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:05.842152  370051 cri.go:89] found id: ""
	I0229 02:33:05.842178  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.842188  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:05.842197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:05.842278  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:05.885042  370051 cri.go:89] found id: ""
	I0229 02:33:05.885071  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.885084  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:05.885092  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:05.885156  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:05.926032  370051 cri.go:89] found id: ""
	I0229 02:33:05.926069  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.926082  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:05.926096  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:05.926112  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:06.014702  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:06.014744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:06.063510  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:06.063550  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:06.114215  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:06.114272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:06.130132  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:06.130169  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:05.490726  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.491068  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.085225  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:09.087524  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:06.508469  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:08.509399  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:06.205692  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:08.706549  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:08.722548  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:08.722614  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:08.768518  370051 cri.go:89] found id: ""
	I0229 02:33:08.768553  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.768564  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:08.768572  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:08.768630  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:08.804600  370051 cri.go:89] found id: ""
	I0229 02:33:08.804630  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.804643  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:08.804651  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:08.804721  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:08.842466  370051 cri.go:89] found id: ""
	I0229 02:33:08.842497  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.842510  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:08.842518  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:08.842589  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:08.878384  370051 cri.go:89] found id: ""
	I0229 02:33:08.878412  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.878421  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:08.878427  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:08.878484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:08.924228  370051 cri.go:89] found id: ""
	I0229 02:33:08.924262  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.924275  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:08.924295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:08.924374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:08.966122  370051 cri.go:89] found id: ""
	I0229 02:33:08.966157  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.966168  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:08.966177  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:08.966254  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:09.011109  370051 cri.go:89] found id: ""
	I0229 02:33:09.011135  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.011144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:09.011152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:09.011217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:09.059716  370051 cri.go:89] found id: ""
	I0229 02:33:09.059749  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.059782  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:09.059795  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:09.059812  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:09.110564  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:09.110599  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:09.126037  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:09.126065  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:09.199827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:09.199858  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:09.199892  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:09.282624  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:09.282661  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:09.990502  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.991783  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.586475  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:13.586740  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:10.511051  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:12.512644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:15.009478  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
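The pod_ready lines interleaved here come from three other test processes (PIDs 369591, 369508, 369869), each polling a metrics-server pod in kube-system until its Ready condition flips to True. A roughly equivalent manual check is sketched below; the pod name is taken from the log, while the jsonpath query is an editorial stand-in for pod_ready.go, not its implementation.

    # Approximately what the pod_ready polling is waiting on, as a one-off check.
    kubectl --namespace kube-system get pod metrics-server-57f55c9bc5-86frx \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Prints "False" for as long as these log lines keep appearing.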
	I0229 02:33:11.829017  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:11.842826  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:11.842894  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:11.881652  370051 cri.go:89] found id: ""
	I0229 02:33:11.881689  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.881700  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:11.881709  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:11.881773  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:11.919252  370051 cri.go:89] found id: ""
	I0229 02:33:11.919291  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.919302  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:11.919309  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:11.919380  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:11.959145  370051 cri.go:89] found id: ""
	I0229 02:33:11.959175  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.959187  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:11.959196  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:11.959263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:12.002105  370051 cri.go:89] found id: ""
	I0229 02:33:12.002134  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.002145  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:12.002153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:12.002219  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:12.042157  370051 cri.go:89] found id: ""
	I0229 02:33:12.042188  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.042221  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:12.042249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:12.042326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:12.080121  370051 cri.go:89] found id: ""
	I0229 02:33:12.080150  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.080158  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:12.080165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:12.080231  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:12.119259  370051 cri.go:89] found id: ""
	I0229 02:33:12.119286  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.119294  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:12.119301  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:12.119357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:12.160136  370051 cri.go:89] found id: ""
	I0229 02:33:12.160171  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.160182  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:12.160195  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:12.160209  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:12.209770  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:12.209810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:12.226429  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:12.226460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:12.295933  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:12.295966  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:12.295978  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:12.380794  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:12.380843  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:14.971692  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:14.986085  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:14.986162  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:15.024756  370051 cri.go:89] found id: ""
	I0229 02:33:15.024788  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.024801  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:15.024809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:15.024868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:15.065131  370051 cri.go:89] found id: ""
	I0229 02:33:15.065159  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.065172  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:15.065180  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:15.065251  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:15.104744  370051 cri.go:89] found id: ""
	I0229 02:33:15.104775  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.104786  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:15.104794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:15.104858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:15.145710  370051 cri.go:89] found id: ""
	I0229 02:33:15.145737  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.145745  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:15.145752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:15.145803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:15.184908  370051 cri.go:89] found id: ""
	I0229 02:33:15.184933  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.184942  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:15.184951  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:15.185016  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:15.230195  370051 cri.go:89] found id: ""
	I0229 02:33:15.230220  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.230241  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:15.230249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:15.230326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:15.269750  370051 cri.go:89] found id: ""
	I0229 02:33:15.269774  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.269783  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:15.269789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:15.269852  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:15.312331  370051 cri.go:89] found id: ""
	I0229 02:33:15.312360  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.312373  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:15.312387  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:15.312402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:15.363032  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:15.363067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:15.422421  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:15.422463  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:15.445235  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:15.445272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:15.530010  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:15.530047  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:15.530066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:14.489188  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.991028  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.090733  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.587045  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:17.510766  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:20.009379  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.116265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:18.130375  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:18.130439  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:18.167740  370051 cri.go:89] found id: ""
	I0229 02:33:18.167767  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.167776  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:18.167782  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:18.167843  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:18.205621  370051 cri.go:89] found id: ""
	I0229 02:33:18.205653  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.205662  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:18.205670  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:18.205725  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:18.246917  370051 cri.go:89] found id: ""
	I0229 02:33:18.246954  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.246975  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:18.246983  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:18.247040  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:18.285087  370051 cri.go:89] found id: ""
	I0229 02:33:18.285114  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.285123  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:18.285130  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:18.285181  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:18.323989  370051 cri.go:89] found id: ""
	I0229 02:33:18.324018  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.324027  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:18.324033  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:18.324094  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:18.372741  370051 cri.go:89] found id: ""
	I0229 02:33:18.372769  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.372779  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:18.372785  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:18.372838  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:18.432846  370051 cri.go:89] found id: ""
	I0229 02:33:18.432888  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.432900  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:18.432908  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:18.432977  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:18.486357  370051 cri.go:89] found id: ""
	I0229 02:33:18.486387  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.486399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:18.486411  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:18.486431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:18.532363  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:18.532402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:18.582035  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:18.582076  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:18.599009  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:18.599050  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:18.673580  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:18.673609  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:18.673625  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:19.490704  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.990251  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.085541  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:23.086148  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:22.009826  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:24.509388  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.259614  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:21.274150  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:21.274250  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:21.311859  370051 cri.go:89] found id: ""
	I0229 02:33:21.311895  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.311908  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:21.311917  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:21.311984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:21.364260  370051 cri.go:89] found id: ""
	I0229 02:33:21.364296  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.364309  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:21.364317  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:21.364391  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:21.424181  370051 cri.go:89] found id: ""
	I0229 02:33:21.424217  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.424229  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:21.424237  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:21.424306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:21.482499  370051 cri.go:89] found id: ""
	I0229 02:33:21.482531  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.482543  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:21.482551  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:21.482621  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:21.523743  370051 cri.go:89] found id: ""
	I0229 02:33:21.523775  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.523785  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:21.523793  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:21.523868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:21.563759  370051 cri.go:89] found id: ""
	I0229 02:33:21.563789  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.563800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:21.563809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:21.563889  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:21.610162  370051 cri.go:89] found id: ""
	I0229 02:33:21.610265  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.610286  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:21.610295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:21.610378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:21.652001  370051 cri.go:89] found id: ""
	I0229 02:33:21.652028  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.652037  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:21.652047  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:21.652060  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:21.704028  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:21.704067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:21.720924  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:21.720956  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:21.798619  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:21.798645  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:21.798664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:21.888445  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:21.888506  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.437647  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:24.459963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:24.460041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:24.503906  370051 cri.go:89] found id: ""
	I0229 02:33:24.503940  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.503950  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:24.503956  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:24.504031  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:24.541893  370051 cri.go:89] found id: ""
	I0229 02:33:24.541919  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.541929  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:24.541935  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:24.541991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:24.584717  370051 cri.go:89] found id: ""
	I0229 02:33:24.584748  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.584760  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:24.584769  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:24.584836  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:24.623334  370051 cri.go:89] found id: ""
	I0229 02:33:24.623362  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.623371  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:24.623378  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:24.623447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:24.665862  370051 cri.go:89] found id: ""
	I0229 02:33:24.665890  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.665902  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:24.665911  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:24.665984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:24.705509  370051 cri.go:89] found id: ""
	I0229 02:33:24.705540  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.705551  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:24.705560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:24.705634  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:24.745348  370051 cri.go:89] found id: ""
	I0229 02:33:24.745389  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.745399  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:24.745406  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:24.745462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:24.785490  370051 cri.go:89] found id: ""
	I0229 02:33:24.785520  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.785529  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:24.785539  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:24.785553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.829556  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:24.829589  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:24.877914  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:24.877949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:24.894590  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:24.894623  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:24.972948  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:24.972981  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:24.972997  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:23.990806  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.489823  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:25.586684  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.588321  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.509932  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:29.010692  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.555364  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:27.570747  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:27.570820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:27.609771  370051 cri.go:89] found id: ""
	I0229 02:33:27.609800  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.609807  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:27.609813  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:27.609863  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:27.654316  370051 cri.go:89] found id: ""
	I0229 02:33:27.654347  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.654360  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:27.654376  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:27.654453  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:27.695089  370051 cri.go:89] found id: ""
	I0229 02:33:27.695125  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.695137  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:27.695143  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:27.695199  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:27.733846  370051 cri.go:89] found id: ""
	I0229 02:33:27.733881  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.733893  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:27.733901  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:27.733972  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:27.772906  370051 cri.go:89] found id: ""
	I0229 02:33:27.772940  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.772953  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:27.772961  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:27.773039  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:27.812266  370051 cri.go:89] found id: ""
	I0229 02:33:27.812295  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.812308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:27.812316  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:27.812387  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:27.849272  370051 cri.go:89] found id: ""
	I0229 02:33:27.849305  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.849316  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:27.849324  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:27.849393  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:27.887495  370051 cri.go:89] found id: ""
	I0229 02:33:27.887528  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.887541  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:27.887554  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:27.887569  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:27.972220  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:27.972261  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:28.020757  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:28.020797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:28.070347  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:28.070381  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:28.089905  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:28.089947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:28.183306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
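The recurring stderr line means nothing is accepting connections on the apiserver port, which is consistent with the empty kube-apiserver container listings above. Two quick checks one could run by hand to confirm that reading (editorial suggestions, not commands executed by the test itself):

    # Was a kube-apiserver container ever created, or is the port simply dead?
    sudo crictl ps -a --name kube-apiserver
    curl -sk https://localhost:8443/healthz   # refused while the apiserver is down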
	I0229 02:33:30.683857  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:30.701341  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:30.701443  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:30.741342  370051 cri.go:89] found id: ""
	I0229 02:33:30.741376  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.741387  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:30.741397  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:30.741475  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:30.785372  370051 cri.go:89] found id: ""
	I0229 02:33:30.785415  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.785427  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:30.785435  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:30.785506  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:30.828402  370051 cri.go:89] found id: ""
	I0229 02:33:30.828428  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.828436  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:30.828442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:30.828504  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:30.872656  370051 cri.go:89] found id: ""
	I0229 02:33:30.872684  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.872695  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:30.872704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:30.872770  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:30.918746  370051 cri.go:89] found id: ""
	I0229 02:33:30.918775  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.918786  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:30.918794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:30.918867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:30.956794  370051 cri.go:89] found id: ""
	I0229 02:33:30.956838  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.956852  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:30.956860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:30.956935  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:31.000595  370051 cri.go:89] found id: ""
	I0229 02:33:31.000618  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.000628  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:31.000637  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:31.000699  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:31.039060  370051 cri.go:89] found id: ""
	I0229 02:33:31.039089  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.039100  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:31.039111  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:31.039133  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:31.089919  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:31.089949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:31.110276  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:31.110315  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:28.990807  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.993882  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.489703  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.086658  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:32.586407  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:34.588272  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:31.509534  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.511710  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:31.235760  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:31.235791  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:31.235810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:31.323257  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:31.323322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
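	The cycle above is minikube's control-plane probe: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) it runs `sudo crictl ps -a --quiet --name=<component>` over SSH and treats empty output as "0 containers". A minimal local sketch of that check, assuming crictl is installed; the helper and the shortened component list are illustrative, not minikube's actual cri.go:

```go
// Sketch of the per-component probe logged above: run crictl with a name
// filter and treat empty output as "no container found". minikube issues
// the same command over SSH via ssh_runner.go; this runs it locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl reports for a name filter.
func listContainerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil // mirrors the empty results seen on this node
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		fmt.Printf("%s: %d containers\n", c, len(listContainerIDs(c)))
	}
}
```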
	I0229 02:33:33.872956  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:33.887953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:33.888034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:33.927887  370051 cri.go:89] found id: ""
	I0229 02:33:33.927926  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.927938  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:33.927945  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:33.928001  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:33.967301  370051 cri.go:89] found id: ""
	I0229 02:33:33.967333  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.967345  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:33.967356  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:33.967425  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:34.009949  370051 cri.go:89] found id: ""
	I0229 02:33:34.009982  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.009992  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:34.009999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:34.010073  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:34.056197  370051 cri.go:89] found id: ""
	I0229 02:33:34.056224  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.056232  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:34.056239  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:34.056314  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:34.107089  370051 cri.go:89] found id: ""
	I0229 02:33:34.107120  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.107132  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:34.107140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:34.107206  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:34.162822  370051 cri.go:89] found id: ""
	I0229 02:33:34.162856  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.162875  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:34.162884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:34.162961  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:34.209963  370051 cri.go:89] found id: ""
	I0229 02:33:34.209993  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.210001  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:34.210008  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:34.210078  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:34.250688  370051 cri.go:89] found id: ""
	I0229 02:33:34.250726  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.250735  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:34.250754  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:34.250768  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:34.298953  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:34.298993  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:34.314067  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:34.314100  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:34.393515  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:34.393536  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:34.393551  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:34.477034  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:34.477078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
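	Each retry begins with `sudo pgrep -xnf kube-apiserver.*minikube.*`: match the full command line (-f) exactly against the pattern (-x) and report only the newest match (-n). A non-empty result would mean an apiserver process exists even when no CRI container is visible; here it never does. A hedged sketch of the same process-level check run locally:

```go
// Sketch of the process check above. pgrep exits non-zero when nothing
// matches, which is what happens throughout this log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Printf("newest matching pid: %s", out)
}
```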
	I0229 02:33:35.990175  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.490651  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.087261  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:39.588400  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:36.009933  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.508929  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.025152  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:37.040410  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:37.040491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:37.077922  370051 cri.go:89] found id: ""
	I0229 02:33:37.077953  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.077965  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:37.077973  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:37.078041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:37.137895  370051 cri.go:89] found id: ""
	I0229 02:33:37.137925  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.137938  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:37.137946  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:37.138012  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:37.199291  370051 cri.go:89] found id: ""
	I0229 02:33:37.199324  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.199336  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:37.199344  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:37.199422  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:37.242817  370051 cri.go:89] found id: ""
	I0229 02:33:37.242848  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.242857  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:37.242863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:37.242917  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:37.282171  370051 cri.go:89] found id: ""
	I0229 02:33:37.282196  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.282204  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:37.282211  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:37.282284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:37.328608  370051 cri.go:89] found id: ""
	I0229 02:33:37.328639  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.328647  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:37.328658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:37.328724  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:37.372965  370051 cri.go:89] found id: ""
	I0229 02:33:37.372996  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.373008  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:37.373016  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:37.373091  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:37.417597  370051 cri.go:89] found id: ""
	I0229 02:33:37.417630  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.417642  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:37.417655  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:37.417673  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:37.472023  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:37.472058  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:37.487931  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:37.487961  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:37.568196  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:37.568227  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:37.568245  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:37.658485  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:37.658523  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.203039  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:40.220385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:40.220477  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:40.262962  370051 cri.go:89] found id: ""
	I0229 02:33:40.262993  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.263004  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:40.263016  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:40.263086  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:40.302452  370051 cri.go:89] found id: ""
	I0229 02:33:40.302483  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.302495  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:40.302503  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:40.302560  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:40.342509  370051 cri.go:89] found id: ""
	I0229 02:33:40.342544  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.342557  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:40.342566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:40.342644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:40.385585  370051 cri.go:89] found id: ""
	I0229 02:33:40.385615  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.385629  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:40.385638  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:40.385703  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:40.426839  370051 cri.go:89] found id: ""
	I0229 02:33:40.426874  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.426887  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:40.426896  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:40.426962  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:40.467217  370051 cri.go:89] found id: ""
	I0229 02:33:40.467241  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.467251  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:40.467257  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:40.467332  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:40.513525  370051 cri.go:89] found id: ""
	I0229 02:33:40.513546  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.513553  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:40.513559  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:40.513609  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:40.554187  370051 cri.go:89] found id: ""
	I0229 02:33:40.554256  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.554269  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:40.554282  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:40.554301  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:40.636447  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:40.636477  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:40.636494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:40.716381  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:40.716423  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.761946  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:40.761982  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:40.812828  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:40.812862  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:40.492178  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.991517  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.086413  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:44.586663  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:40.510266  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.510702  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:45.013362  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
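	The interleaved pod_ready.go lines come from three other test profiles (PIDs 369591, 369508, 369869), each polling a metrics-server pod whose Ready condition stays "False". A minimal client-go sketch of that condition check; the kubeconfig path, namespace, and pod name are taken from the log, but the code itself is illustrative rather than minikube's pod_ready.go:

```go
// Minimal sketch: fetch a pod and report its Ready condition, the same
// signal the pod_ready.go:102 lines above keep printing as "False".
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"metrics-server-57f55c9bc5-zghwq", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod))
}
```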
	I0229 02:33:43.336139  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:43.352278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:43.352361  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:43.392555  370051 cri.go:89] found id: ""
	I0229 02:33:43.392593  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.392607  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:43.392616  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:43.392689  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:43.438169  370051 cri.go:89] found id: ""
	I0229 02:33:43.438202  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.438216  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:43.438242  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:43.438331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:43.476987  370051 cri.go:89] found id: ""
	I0229 02:33:43.477021  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.477033  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:43.477042  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:43.477109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:43.526728  370051 cri.go:89] found id: ""
	I0229 02:33:43.526758  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.526767  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:43.526778  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:43.526833  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:43.572222  370051 cri.go:89] found id: ""
	I0229 02:33:43.572260  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.572273  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:43.572282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:43.572372  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:43.618650  370051 cri.go:89] found id: ""
	I0229 02:33:43.618679  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.618691  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:43.618698  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:43.618764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:43.658069  370051 cri.go:89] found id: ""
	I0229 02:33:43.658104  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.658116  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:43.658126  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:43.658196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:43.700790  370051 cri.go:89] found id: ""
	I0229 02:33:43.700829  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.700841  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:43.700855  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:43.700874  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:43.753330  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:43.753372  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:43.770261  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:43.770294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:43.842407  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:43.842430  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:43.842447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:43.935427  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:43.935470  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:45.490296  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.490514  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.088903  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.585902  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.510105  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.511420  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:46.498694  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:46.516463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:46.516541  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:46.554731  370051 cri.go:89] found id: ""
	I0229 02:33:46.554757  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.554766  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:46.554772  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:46.554835  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:46.596851  370051 cri.go:89] found id: ""
	I0229 02:33:46.596892  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.596905  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:46.596912  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:46.596981  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:46.634978  370051 cri.go:89] found id: ""
	I0229 02:33:46.635008  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.635017  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:46.635024  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:46.635089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:46.675302  370051 cri.go:89] found id: ""
	I0229 02:33:46.675334  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.675347  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:46.675355  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:46.675423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:46.717366  370051 cri.go:89] found id: ""
	I0229 02:33:46.717402  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.717413  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:46.717421  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:46.717484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:46.756130  370051 cri.go:89] found id: ""
	I0229 02:33:46.756160  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.756169  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:46.756176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:46.756228  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:46.794283  370051 cri.go:89] found id: ""
	I0229 02:33:46.794312  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.794320  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:46.794328  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:46.794384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:46.836646  370051 cri.go:89] found id: ""
	I0229 02:33:46.836679  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.836691  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:46.836703  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:46.836721  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:46.926532  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:46.926578  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:46.981883  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:46.981915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:47.033571  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:47.033612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:47.049803  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:47.049833  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:47.123389  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
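	With no containers found, the only evidence left is host-level: the last 400 lines of the kubelet and crio journals plus warning-and-above kernel messages. A local sketch of the same gather step; the commands and flags are copied from the log lines, while the wrapper itself is illustrative:

```go
// Illustrative wrapper around the host-log gathering commands seen above,
// run locally instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s (err=%v)\n%s\n", c, err, out)
	}
}
```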
	I0229 02:33:49.623827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:49.638175  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:49.638263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:49.675895  370051 cri.go:89] found id: ""
	I0229 02:33:49.675929  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.675941  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:49.675950  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:49.676009  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:49.720679  370051 cri.go:89] found id: ""
	I0229 02:33:49.720718  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.720730  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:49.720739  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:49.720808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:49.762299  370051 cri.go:89] found id: ""
	I0229 02:33:49.762329  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.762342  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:49.762350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:49.762426  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:49.809330  370051 cri.go:89] found id: ""
	I0229 02:33:49.809364  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.809376  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:49.809391  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:49.809455  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:49.859176  370051 cri.go:89] found id: ""
	I0229 02:33:49.859206  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.859218  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:49.859226  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:49.859292  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:49.914844  370051 cri.go:89] found id: ""
	I0229 02:33:49.914877  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.914890  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:49.914897  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:49.914967  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:49.969640  370051 cri.go:89] found id: ""
	I0229 02:33:49.969667  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.969676  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:49.969682  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:49.969736  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:50.010924  370051 cri.go:89] found id: ""
	I0229 02:33:50.010953  370051 logs.go:276] 0 containers: []
	W0229 02:33:50.010965  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:50.010976  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:50.011002  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:50.089462  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:50.089494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:50.132098  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:50.132129  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:50.182693  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:50.182737  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:50.198209  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:50.198256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:50.281521  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
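	Every "describe nodes" attempt fails identically: kubectl cannot reach localhost:8443, the secure port this kubeconfig points at, so the apiserver is simply not listening. A quick way to reproduce the refusal independent of kubectl; the endpoint comes from the log, the probe itself is an assumption and not part of the test suite:

```go
// Probe the apiserver health endpoint that keeps refusing connections above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Certificate checks are irrelevant for a reachability probe.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // matches the refusals above
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
```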
	I0229 02:33:49.991831  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.489891  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.586298  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:53.587249  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.513176  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:54.010209  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.781677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:52.795962  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:52.796055  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:52.833670  370051 cri.go:89] found id: ""
	I0229 02:33:52.833706  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.833718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:52.833728  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:52.833795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:52.889497  370051 cri.go:89] found id: ""
	I0229 02:33:52.889529  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.889539  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:52.889547  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:52.889616  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:52.952880  370051 cri.go:89] found id: ""
	I0229 02:33:52.952915  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.952927  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:52.952935  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:52.953002  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:53.008380  370051 cri.go:89] found id: ""
	I0229 02:33:53.008409  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.008420  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:53.008434  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:53.008502  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:53.047877  370051 cri.go:89] found id: ""
	I0229 02:33:53.047911  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.047922  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:53.047931  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:53.047999  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:53.086080  370051 cri.go:89] found id: ""
	I0229 02:33:53.086107  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.086118  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:53.086127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:53.086193  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:53.128334  370051 cri.go:89] found id: ""
	I0229 02:33:53.128368  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.128378  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:53.128385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:53.128457  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:53.172201  370051 cri.go:89] found id: ""
	I0229 02:33:53.172232  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.172245  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:53.172258  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:53.172275  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:53.222608  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:53.222648  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:53.239888  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:53.239918  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:53.315827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:53.315850  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:53.315864  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:53.395457  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:53.395498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:55.943418  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:55.960562  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:55.960638  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:56.005181  370051 cri.go:89] found id: ""
	I0229 02:33:56.005210  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.005221  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:56.005229  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:56.005293  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:56.046700  370051 cri.go:89] found id: ""
	I0229 02:33:56.046731  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.046743  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:56.046750  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:56.046814  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:56.088459  370051 cri.go:89] found id: ""
	I0229 02:33:56.088486  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.088497  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:56.088505  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:56.088571  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:56.127729  370051 cri.go:89] found id: ""
	I0229 02:33:56.127762  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.127774  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:56.127783  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:56.127862  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:54.491536  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.493973  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.089188  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.586570  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.011539  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.509708  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.169980  370051 cri.go:89] found id: ""
	I0229 02:33:56.170011  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.170022  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:56.170030  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:56.170098  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:56.210650  370051 cri.go:89] found id: ""
	I0229 02:33:56.210682  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.210694  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:56.210704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:56.210771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:56.247342  370051 cri.go:89] found id: ""
	I0229 02:33:56.247380  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.247391  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:56.247400  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:56.247474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:56.286322  370051 cri.go:89] found id: ""
	I0229 02:33:56.286353  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.286364  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:56.286375  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:56.286393  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:56.335144  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:56.335184  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:56.351322  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:56.351359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:56.424251  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:56.424282  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:56.424299  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:56.506053  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:56.506082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
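	The "container status" step is deliberately defensive: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` resolves crictl from PATH when present (falling back to the bare name, which fails cleanly) and only then tries docker. The same fallback chain expressed in Go, as an illustration rather than minikube's code:

```go
// Sketch of the fallback in the "container status" command above:
// prefer crictl if it is on PATH, otherwise fall back to docker.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if _, err := exec.LookPath("crictl"); err == nil {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		fmt.Print(string(out))
		return
	}
	out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	fmt.Print(string(out))
}
```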
	I0229 02:33:59.052805  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:59.067508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:59.067599  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:59.114213  370051 cri.go:89] found id: ""
	I0229 02:33:59.114256  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.114268  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:59.114276  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:59.114327  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:59.161087  370051 cri.go:89] found id: ""
	I0229 02:33:59.161123  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.161136  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:59.161145  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:59.161217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:59.206071  370051 cri.go:89] found id: ""
	I0229 02:33:59.206101  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.206114  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:59.206122  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:59.206196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:59.245152  370051 cri.go:89] found id: ""
	I0229 02:33:59.245179  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.245188  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:59.245194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:59.245247  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:59.286047  370051 cri.go:89] found id: ""
	I0229 02:33:59.286080  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.286092  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:59.286101  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:59.286165  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:59.323171  370051 cri.go:89] found id: ""
	I0229 02:33:59.323203  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.323214  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:59.323222  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:59.323288  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:59.364434  370051 cri.go:89] found id: ""
	I0229 02:33:59.364464  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.364477  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:59.364485  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:59.364554  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:59.405902  370051 cri.go:89] found id: ""
	I0229 02:33:59.405929  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.405938  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:59.405948  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:59.405980  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:59.481810  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:59.481841  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:59.481858  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:59.575726  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:59.575767  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.634808  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:59.634849  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:59.702513  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:59.702552  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:58.991152  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.490426  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:00.587747  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:02.594677  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.010009  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:03.509687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:02.219660  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:02.234037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:02.234105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:02.277956  370051 cri.go:89] found id: ""
	I0229 02:34:02.277982  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.277991  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:02.277998  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:02.278071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:02.322832  370051 cri.go:89] found id: ""
	I0229 02:34:02.322856  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.322869  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:02.322878  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:02.322949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:02.368612  370051 cri.go:89] found id: ""
	I0229 02:34:02.368646  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.368659  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:02.368668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:02.368731  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:02.412436  370051 cri.go:89] found id: ""
	I0229 02:34:02.412466  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.412479  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:02.412486  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:02.412544  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:02.448682  370051 cri.go:89] found id: ""
	I0229 02:34:02.448713  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.448724  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:02.448733  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:02.448803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:02.486676  370051 cri.go:89] found id: ""
	I0229 02:34:02.486705  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.486723  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:02.486730  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:02.486795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:02.531814  370051 cri.go:89] found id: ""
	I0229 02:34:02.531841  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.531852  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:02.531860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:02.531934  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:02.569800  370051 cri.go:89] found id: ""
	I0229 02:34:02.569835  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.569845  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:02.569857  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:02.569871  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:02.623903  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:02.623937  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:02.643856  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:02.643884  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:02.735520  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:02.735544  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:02.735563  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:02.816572  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:02.816612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
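
The cycle that just completed repeats throughout this log: the minikube process (370051) asks the CRI runtime for each expected control-plane container, every query comes back with found id: "", and the retry loop starts over. A minimal Go sketch of that probe pattern follows — an illustration, not minikube's actual cri.go — assuming crictl is installed and passwordless sudo is available on the node:

// Probe each expected control-plane component via crictl and warn
// when no matching container exists, mirroring the log lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors: sudo crictl ps -a --quiet --name=<name>
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// crictl --quiet prints one container ID per line; none means empty.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("error listing %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// The state the log reports for every component in every cycle.
			fmt.Printf("no container found matching %q\n", c)
		}
	}
}
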
	I0229 02:34:05.371459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:05.385179  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:05.385255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:05.424653  370051 cri.go:89] found id: ""
	I0229 02:34:05.424679  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.424687  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:05.424694  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:05.424752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:05.463726  370051 cri.go:89] found id: ""
	I0229 02:34:05.463754  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.463763  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:05.463769  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:05.463823  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:05.510367  370051 cri.go:89] found id: ""
	I0229 02:34:05.510396  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.510407  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:05.510415  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:05.510480  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:05.548421  370051 cri.go:89] found id: ""
	I0229 02:34:05.548445  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.548455  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:05.548461  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:05.548527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:05.588778  370051 cri.go:89] found id: ""
	I0229 02:34:05.588801  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.588809  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:05.588815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:05.588875  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:05.638449  370051 cri.go:89] found id: ""
	I0229 02:34:05.638479  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.638490  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:05.638506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:05.638567  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:05.709921  370051 cri.go:89] found id: ""
	I0229 02:34:05.709950  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.709964  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:05.709972  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:05.710038  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:05.756965  370051 cri.go:89] found id: ""
	I0229 02:34:05.756992  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.757000  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:05.757010  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:05.757025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:05.826878  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:05.826904  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:05.826921  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:05.909205  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:05.909256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.954537  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:05.954594  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:06.004157  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:06.004203  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
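
Every "describe nodes" attempt above fails the same way: the bundled v1.16.0 kubectl cannot reach the apiserver because nothing is listening on localhost:8443, which is consistent with crictl finding no kube-apiserver container. "The connection to the server localhost:8443 was refused" is a plain TCP connection refusal, as this small stand-alone Go check illustrates (a hypothetical diagnostic, not part of the test suite):

// Dial the apiserver port directly; with no apiserver running this
// prints a "connection refused" error, the same condition kubectl
// reports in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
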
	I0229 02:34:03.989381  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.990323  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.491379  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.086296  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:07.586477  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.511758  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.009545  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.010247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
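
The interleaved pod_ready.go lines come from three other minikube processes (369508, 369591, 369869) running in parallel; each is polling its own metrics-server pod in kube-system, and none of the pods ever reports Ready. The underlying pattern is a poll-until-deadline loop; here is a self-contained sketch of that shape (waitFor is a hypothetical helper, not minikube's pod_ready implementation):

// Poll a condition on an interval until it holds or a deadline passes,
// the pattern behind the repeated "has status Ready:False" lines above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor calls check every interval until it returns true, returns an
// error, or timeout elapses.
func waitFor(interval, timeout time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// Placeholder condition standing in for "is the pod Ready?":
	// here it simply becomes true after five seconds.
	err := waitFor(2*time.Second, 10*time.Second, func() (bool, error) {
		return time.Since(start) > 5*time.Second, nil
	})
	fmt.Println("result:", err)
}
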
	I0229 02:34:08.522975  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:08.539247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:08.539326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:08.579776  370051 cri.go:89] found id: ""
	I0229 02:34:08.579806  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.579817  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:08.579826  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:08.579890  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:08.628415  370051 cri.go:89] found id: ""
	I0229 02:34:08.628444  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.628456  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:08.628468  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:08.628534  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:08.690499  370051 cri.go:89] found id: ""
	I0229 02:34:08.690530  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.690540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:08.690547  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:08.690613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:08.739755  370051 cri.go:89] found id: ""
	I0229 02:34:08.739788  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.739801  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:08.739809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:08.739906  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:08.781693  370051 cri.go:89] found id: ""
	I0229 02:34:08.781721  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.781733  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:08.781742  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:08.781808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:08.818605  370051 cri.go:89] found id: ""
	I0229 02:34:08.818637  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.818645  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:08.818652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:08.818713  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:08.861533  370051 cri.go:89] found id: ""
	I0229 02:34:08.861559  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.861569  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:08.861578  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:08.861658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:08.902727  370051 cri.go:89] found id: ""
	I0229 02:34:08.902758  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.902771  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:08.902784  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:08.902801  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:08.948527  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:08.948567  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:08.999883  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:08.999916  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:09.015438  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:09.015467  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:09.087965  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:09.087994  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:09.088010  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
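
The "container status" gather in the cycle above uses a shell fallback — sudo `which crictl || echo crictl` ps -a || sudo docker ps -a — meaning: prefer crictl, and fall back to docker if crictl is absent or errors out. The same idiom expressed in Go, as a sketch assuming sudo plus crictl and/or docker on the host:

// Prefer crictl for container status (CRI runtimes such as CRI-O or
// containerd) and fall back to docker for Docker-based runtimes.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	// First choice: crictl.
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return string(out), nil
	}
	// Fallback: docker.
	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	if err != nil {
		return "", fmt.Errorf("neither crictl nor docker produced container status: %w", err)
	}
	return string(out), nil
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
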
	I0229 02:34:10.990135  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.991074  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.085517  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.086653  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:14.086817  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.510247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:15.010412  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:11.671443  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:11.702197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:11.702322  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:11.755104  370051 cri.go:89] found id: ""
	I0229 02:34:11.755136  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.755147  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:11.755153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:11.755204  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:11.794190  370051 cri.go:89] found id: ""
	I0229 02:34:11.794218  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.794239  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:11.794247  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:11.794310  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:11.837330  370051 cri.go:89] found id: ""
	I0229 02:34:11.837360  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.837372  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:11.837380  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:11.837447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:11.876682  370051 cri.go:89] found id: ""
	I0229 02:34:11.876716  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.876726  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:11.876734  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:11.876805  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:11.922172  370051 cri.go:89] found id: ""
	I0229 02:34:11.922239  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.922262  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:11.922271  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:11.922341  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:11.962218  370051 cri.go:89] found id: ""
	I0229 02:34:11.962270  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.962283  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:11.962291  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:11.962375  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:12.002075  370051 cri.go:89] found id: ""
	I0229 02:34:12.002101  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.002110  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:12.002117  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:12.002169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:12.043337  370051 cri.go:89] found id: ""
	I0229 02:34:12.043378  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.043399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:12.043412  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:12.043428  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:12.094458  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:12.094491  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:12.112374  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:12.112401  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:12.193665  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:12.193689  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:12.193717  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:12.282510  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:12.282553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:14.828451  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:14.843626  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:14.843690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:14.884181  370051 cri.go:89] found id: ""
	I0229 02:34:14.884214  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.884226  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:14.884235  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:14.884302  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:14.926312  370051 cri.go:89] found id: ""
	I0229 02:34:14.926347  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.926361  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:14.926369  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:14.926436  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:14.969147  370051 cri.go:89] found id: ""
	I0229 02:34:14.969182  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.969195  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:14.969207  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:14.969277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:15.013000  370051 cri.go:89] found id: ""
	I0229 02:34:15.013045  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.013055  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:15.013064  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:15.013120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:15.055811  370051 cri.go:89] found id: ""
	I0229 02:34:15.055849  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.055861  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:15.055869  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:15.055939  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:15.100736  370051 cri.go:89] found id: ""
	I0229 02:34:15.100768  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.100780  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:15.100789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:15.100867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:15.140115  370051 cri.go:89] found id: ""
	I0229 02:34:15.140151  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.140164  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:15.140172  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:15.140239  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:15.183545  370051 cri.go:89] found id: ""
	I0229 02:34:15.183576  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.183588  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:15.183602  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:15.183621  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:15.258646  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:15.258676  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:15.258693  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:15.347035  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:15.347082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:15.407148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:15.407178  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:15.466695  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:15.466741  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:15.490797  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.990851  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:16.585993  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:18.587604  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.509114  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:19.509856  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.989102  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:18.005052  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:18.005126  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:18.044687  370051 cri.go:89] found id: ""
	I0229 02:34:18.044714  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.044725  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:18.044739  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:18.044815  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:18.085904  370051 cri.go:89] found id: ""
	I0229 02:34:18.085934  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.085944  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:18.085952  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:18.086017  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:18.129958  370051 cri.go:89] found id: ""
	I0229 02:34:18.129985  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.129994  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:18.129999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:18.130052  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:18.166942  370051 cri.go:89] found id: ""
	I0229 02:34:18.166979  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.166991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:18.167000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:18.167056  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:18.205297  370051 cri.go:89] found id: ""
	I0229 02:34:18.205324  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.205331  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:18.205337  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:18.205410  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:18.246415  370051 cri.go:89] found id: ""
	I0229 02:34:18.246448  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.246461  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:18.246469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:18.246527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:18.285534  370051 cri.go:89] found id: ""
	I0229 02:34:18.285573  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.285585  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:18.285600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:18.285662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:18.327624  370051 cri.go:89] found id: ""
	I0229 02:34:18.327651  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.327659  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:18.327670  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:18.327684  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:18.383307  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:18.383351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:18.408127  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:18.408162  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:18.502036  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:18.502070  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:18.502093  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:18.582289  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:18.582340  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:20.490582  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:22.990210  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.086446  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:23.586600  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.511411  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:24.009976  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.135649  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:21.149411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:21.149498  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:21.198246  370051 cri.go:89] found id: ""
	I0229 02:34:21.198286  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.198298  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:21.198306  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:21.198378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:21.240168  370051 cri.go:89] found id: ""
	I0229 02:34:21.240195  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.240203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:21.240209  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:21.240275  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:21.281243  370051 cri.go:89] found id: ""
	I0229 02:34:21.281277  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.281288  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:21.281296  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:21.281359  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:21.321573  370051 cri.go:89] found id: ""
	I0229 02:34:21.321609  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.321621  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:21.321629  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:21.321693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:21.375156  370051 cri.go:89] found id: ""
	I0229 02:34:21.375212  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.375226  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:21.375234  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:21.375308  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:21.430450  370051 cri.go:89] found id: ""
	I0229 02:34:21.430487  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.430499  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:21.430508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:21.430576  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:21.475095  370051 cri.go:89] found id: ""
	I0229 02:34:21.475124  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.475135  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:21.475144  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:21.475215  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:21.517378  370051 cri.go:89] found id: ""
	I0229 02:34:21.517403  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.517412  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:21.517424  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:21.517444  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:21.534103  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:21.534147  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:21.608375  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:21.608400  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:21.608412  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:21.691912  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:21.691950  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:21.744366  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:21.744406  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.295384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:24.309456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:24.309539  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:24.370125  370051 cri.go:89] found id: ""
	I0229 02:34:24.370156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.370167  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:24.370175  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:24.370256  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:24.439458  370051 cri.go:89] found id: ""
	I0229 02:34:24.439487  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.439499  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:24.439506  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:24.439639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:24.478070  370051 cri.go:89] found id: ""
	I0229 02:34:24.478105  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.478119  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:24.478127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:24.478194  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:24.517128  370051 cri.go:89] found id: ""
	I0229 02:34:24.517156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.517168  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:24.517176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:24.517243  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:24.555502  370051 cri.go:89] found id: ""
	I0229 02:34:24.555537  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.555549  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:24.555557  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:24.555625  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:24.601261  370051 cri.go:89] found id: ""
	I0229 02:34:24.601295  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.601307  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:24.601315  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:24.601389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:24.639110  370051 cri.go:89] found id: ""
	I0229 02:34:24.639141  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.639153  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:24.639161  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:24.639224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:24.681448  370051 cri.go:89] found id: ""
	I0229 02:34:24.681478  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.681487  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:24.681498  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:24.681517  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.730735  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:24.730775  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:24.746996  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:24.747031  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:24.827581  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:24.827608  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:24.827628  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:24.909551  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:24.909596  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:24.990581  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.489787  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:25.586672  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.586999  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:26.509819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:29.009014  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.455967  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:27.477411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:27.477487  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:27.523163  370051 cri.go:89] found id: ""
	I0229 02:34:27.523189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.523198  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:27.523203  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:27.523258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:27.562298  370051 cri.go:89] found id: ""
	I0229 02:34:27.562330  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.562343  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:27.562350  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:27.562420  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:27.603506  370051 cri.go:89] found id: ""
	I0229 02:34:27.603532  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.603540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:27.603554  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:27.603619  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:27.646971  370051 cri.go:89] found id: ""
	I0229 02:34:27.647002  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.647014  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:27.647031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:27.647109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:27.685124  370051 cri.go:89] found id: ""
	I0229 02:34:27.685149  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.685160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:27.685169  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:27.685235  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:27.726976  370051 cri.go:89] found id: ""
	I0229 02:34:27.727007  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.727018  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:27.727026  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:27.727089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:27.767159  370051 cri.go:89] found id: ""
	I0229 02:34:27.767189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.767197  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:27.767204  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:27.767272  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:27.810377  370051 cri.go:89] found id: ""
	I0229 02:34:27.810411  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.810420  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:27.810431  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:27.810447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:27.858094  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:27.858136  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:27.874407  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:27.874440  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:27.953065  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:27.953092  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:27.953108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:28.042244  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:28.042278  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:30.588227  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:30.604954  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:30.605037  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:30.642069  370051 cri.go:89] found id: ""
	I0229 02:34:30.642100  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.642108  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:30.642119  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:30.642187  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:30.686212  370051 cri.go:89] found id: ""
	I0229 02:34:30.686264  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.686277  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:30.686285  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:30.686364  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:30.726668  370051 cri.go:89] found id: ""
	I0229 02:34:30.726702  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.726715  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:30.726723  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:30.726788  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:30.766850  370051 cri.go:89] found id: ""
	I0229 02:34:30.766883  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.766895  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:30.766904  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:30.766979  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:30.808972  370051 cri.go:89] found id: ""
	I0229 02:34:30.809002  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.809015  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:30.809023  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:30.809093  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:30.851992  370051 cri.go:89] found id: ""
	I0229 02:34:30.852016  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.852025  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:30.852031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:30.852096  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:30.891100  370051 cri.go:89] found id: ""
	I0229 02:34:30.891132  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.891144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:30.891157  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:30.891227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:30.931740  370051 cri.go:89] found id: ""
	I0229 02:34:30.931768  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.931777  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:30.931787  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:30.931808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:31.010896  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:31.010919  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:31.010936  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:31.094626  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:31.094662  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:29.490211  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.490659  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:30.086898  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:32.587485  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.010003  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:33.510267  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.150765  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:31.150804  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:31.202932  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:31.202976  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
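
Each retry cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.* to check whether an apiserver process exists at all before re-querying the runtime. pgrep exits 0 when a process matches and 1 when none does, which maps naturally onto a boolean, as in this illustrative sketch (not minikube's code):

// Report whether a kube-apiserver process matching the minikube
// pattern is running, distinguishing "no match" from real failures.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func apiserverRunning() (bool, error) {
	cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	err := cmd.Run()
	if err == nil {
		return true, nil // exit status 0: a matching process exists
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return false, nil // exit status 1: pgrep ran fine, no match
	}
	return false, err // anything else: pgrep itself failed
}

func main() {
	ok, err := apiserverRunning()
	fmt.Println(ok, err)
}
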
	I0229 02:34:33.723355  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:33.738651  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:33.738753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:33.778255  370051 cri.go:89] found id: ""
	I0229 02:34:33.778287  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.778299  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:33.778307  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:33.778384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:33.818360  370051 cri.go:89] found id: ""
	I0229 02:34:33.818396  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.818406  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:33.818412  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:33.818564  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:33.866781  370051 cri.go:89] found id: ""
	I0229 02:34:33.866814  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.866824  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:33.866831  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:33.866891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:33.910013  370051 cri.go:89] found id: ""
	I0229 02:34:33.910051  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.910063  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:33.910072  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:33.910146  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:33.956068  370051 cri.go:89] found id: ""
	I0229 02:34:33.956098  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.956106  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:33.956113  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:33.956170  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:34.004997  370051 cri.go:89] found id: ""
	I0229 02:34:34.005027  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.005038  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:34.005047  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:34.005113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:34.059266  370051 cri.go:89] found id: ""
	I0229 02:34:34.059293  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.059302  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:34.059307  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:34.059363  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:34.105601  370051 cri.go:89] found id: ""
	I0229 02:34:34.105631  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.105643  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:34.105654  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:34.105669  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:34.208723  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:34.208764  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:34.262105  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:34.262137  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:34.314528  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:34.314571  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:34.332441  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:34.332477  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:34.406303  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
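	The describe-nodes step fails identically on every iteration: the kubectl binary staged for this cluster's Kubernetes version cannot reach the apiserver on localhost:8443, consistent with the empty kube-apiserver probe above (connection refused means nothing is listening, not an auth failure). A minimal manual check inside the guest, using only paths taken from the log (the curl probe is an assumption, valid only if the guest image ships curl):

	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    # Assumed extra probe: /healthz answers only if an apiserver is listening.
	    curl -ks https://localhost:8443/healthz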
	I0229 02:34:33.990257  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.490844  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:35.085482  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:37.086532  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:39.087022  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.015574  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:38.510064  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
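	Interleaved with the 370051 retry loop, three sibling test processes (369591, 369508, 369869) poll metrics-server pods in their own clusters; pod_ready.go:102 records that each pod's Ready condition is still False. The same condition can be read directly with kubectl's JSONPath output. A sketch, assuming kubectl access to the affected profile (the pod name is one of those in the log; <profile> is a placeholder):

	    kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-zghwq \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'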
	I0229 02:34:36.906814  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
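	Each retry of the 370051 loop opens with a process-level check before any CRI probe: pgrep -f matches against the full command line rather than just the process name, -x requires the pattern to match that whole string, and -n keeps only the newest match. The command from the log, quoted here so the shell does not glob-expand the pattern:

	    # Exits non-zero and prints nothing when no kube-apiserver process exists.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'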
	I0229 02:34:36.922297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:36.922377  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:36.967550  370051 cri.go:89] found id: ""
	I0229 02:34:36.967578  370051 logs.go:276] 0 containers: []
	W0229 02:34:36.967589  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:36.967599  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:36.967662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:37.007589  370051 cri.go:89] found id: ""
	I0229 02:34:37.007614  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.007624  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:37.007632  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:37.007706  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:37.048230  370051 cri.go:89] found id: ""
	I0229 02:34:37.048260  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.048273  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:37.048281  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:37.048354  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:37.089329  370051 cri.go:89] found id: ""
	I0229 02:34:37.089355  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.089365  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:37.089373  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:37.089441  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:37.144654  370051 cri.go:89] found id: ""
	I0229 02:34:37.144687  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.144699  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:37.144708  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:37.144778  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:37.203822  370051 cri.go:89] found id: ""
	I0229 02:34:37.203857  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.203868  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:37.203876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:37.203948  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:37.250369  370051 cri.go:89] found id: ""
	I0229 02:34:37.250398  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.250410  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:37.250417  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:37.250490  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:37.290924  370051 cri.go:89] found id: ""
	I0229 02:34:37.290957  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.290969  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:37.290981  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:37.290995  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:37.343878  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:37.343920  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:37.359307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:37.359336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:37.435264  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:37.435292  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:37.435309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:37.518274  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:37.518309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:40.062232  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:40.079883  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:40.079957  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:40.123826  370051 cri.go:89] found id: ""
	I0229 02:34:40.123856  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.123866  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:40.123874  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:40.123943  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:40.190273  370051 cri.go:89] found id: ""
	I0229 02:34:40.190321  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.190332  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:40.190338  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:40.190395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:40.232921  370051 cri.go:89] found id: ""
	I0229 02:34:40.232949  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.232961  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:40.232968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:40.233034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:40.273490  370051 cri.go:89] found id: ""
	I0229 02:34:40.273517  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.273526  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:40.273538  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:40.273594  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:40.317121  370051 cri.go:89] found id: ""
	I0229 02:34:40.317152  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.317163  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:40.317171  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:40.317230  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:40.363347  370051 cri.go:89] found id: ""
	I0229 02:34:40.363380  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.363389  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:40.363396  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:40.363459  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:40.407187  370051 cri.go:89] found id: ""
	I0229 02:34:40.407213  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.407222  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:40.407231  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:40.407282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:40.447185  370051 cri.go:89] found id: ""
	I0229 02:34:40.447218  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.447229  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:40.447242  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:40.447258  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:40.496998  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:40.497029  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:40.512520  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:40.512549  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:40.589150  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:40.589173  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:40.589190  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:40.677054  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:40.677096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:38.991307  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:40.992688  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.490195  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.585962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.586942  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.009837  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.510138  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.222265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:43.236567  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:43.236629  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:43.282917  370051 cri.go:89] found id: ""
	I0229 02:34:43.282959  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.282976  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:43.282982  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:43.283049  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:43.329273  370051 cri.go:89] found id: ""
	I0229 02:34:43.329302  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.329313  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:43.329321  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:43.329386  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:43.366696  370051 cri.go:89] found id: ""
	I0229 02:34:43.366723  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.366732  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:43.366739  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:43.366800  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:43.405793  370051 cri.go:89] found id: ""
	I0229 02:34:43.405820  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.405828  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:43.405834  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:43.405888  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:43.442870  370051 cri.go:89] found id: ""
	I0229 02:34:43.442898  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.442906  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:43.442912  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:43.442964  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:43.484581  370051 cri.go:89] found id: ""
	I0229 02:34:43.484615  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.484626  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:43.484635  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:43.484702  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:43.530931  370051 cri.go:89] found id: ""
	I0229 02:34:43.530954  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.530963  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:43.530968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:43.531024  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:43.572810  370051 cri.go:89] found id: ""
	I0229 02:34:43.572838  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.572850  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:43.572867  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:43.572883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:43.622815  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:43.622854  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:43.637972  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:43.638012  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:43.713704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:43.713728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:43.713746  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:43.797178  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:43.797220  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:45.490670  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:47.989828  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:45.587464  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.090384  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.009454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.010403  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.347159  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:46.361601  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:46.361682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:46.399751  370051 cri.go:89] found id: ""
	I0229 02:34:46.399784  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.399795  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:46.399804  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:46.399870  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:46.445367  370051 cri.go:89] found id: ""
	I0229 02:34:46.445398  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.445407  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:46.445413  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:46.445486  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:46.490323  370051 cri.go:89] found id: ""
	I0229 02:34:46.490363  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.490385  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:46.490393  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:46.490473  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:46.531406  370051 cri.go:89] found id: ""
	I0229 02:34:46.531441  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.531450  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:46.531456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:46.531507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:46.572759  370051 cri.go:89] found id: ""
	I0229 02:34:46.572787  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.572795  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:46.572804  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:46.572908  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:46.613055  370051 cri.go:89] found id: ""
	I0229 02:34:46.613083  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.613093  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:46.613099  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:46.613153  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:46.657504  370051 cri.go:89] found id: ""
	I0229 02:34:46.657536  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.657544  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:46.657550  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:46.657605  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:46.698008  370051 cri.go:89] found id: ""
	I0229 02:34:46.698057  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.698068  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:46.698080  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:46.698097  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:46.746648  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:46.746682  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:46.761190  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:46.761219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:46.843379  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:46.843403  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:46.843415  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:46.933493  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:46.933546  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.491837  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:49.508647  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:49.508717  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:49.550752  370051 cri.go:89] found id: ""
	I0229 02:34:49.550788  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.550800  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:49.550809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:49.550883  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:49.597623  370051 cri.go:89] found id: ""
	I0229 02:34:49.597663  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.597675  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:49.597683  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:49.597764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:49.635207  370051 cri.go:89] found id: ""
	I0229 02:34:49.635230  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.635238  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:49.635282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:49.635336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:49.674664  370051 cri.go:89] found id: ""
	I0229 02:34:49.674696  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.674708  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:49.674716  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:49.674777  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:49.715391  370051 cri.go:89] found id: ""
	I0229 02:34:49.715420  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.715433  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:49.715442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:49.715497  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:49.753318  370051 cri.go:89] found id: ""
	I0229 02:34:49.753352  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.753373  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:49.753382  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:49.753451  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:49.791342  370051 cri.go:89] found id: ""
	I0229 02:34:49.791369  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.791377  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:49.791384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:49.791456  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:49.838148  370051 cri.go:89] found id: ""
	I0229 02:34:49.838181  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.838191  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:49.838204  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:49.838244  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:49.891532  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:49.891568  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:49.917625  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:49.917664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:50.019436  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:50.019457  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:50.019472  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:50.108302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:50.108349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.991272  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.491139  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.586940  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.509504  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:53.010818  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.654561  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:52.668331  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:52.668402  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:52.718431  370051 cri.go:89] found id: ""
	I0229 02:34:52.718471  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.718484  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:52.718493  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:52.718551  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:52.757913  370051 cri.go:89] found id: ""
	I0229 02:34:52.757946  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.757957  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:52.757965  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:52.758035  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:52.796792  370051 cri.go:89] found id: ""
	I0229 02:34:52.796821  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.796833  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:52.796842  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:52.796913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:52.832157  370051 cri.go:89] found id: ""
	I0229 02:34:52.832187  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.832196  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:52.832203  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:52.832264  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:52.879170  370051 cri.go:89] found id: ""
	I0229 02:34:52.879197  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.879206  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:52.879212  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:52.879265  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:52.924219  370051 cri.go:89] found id: ""
	I0229 02:34:52.924249  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.924258  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:52.924264  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:52.924318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:52.980422  370051 cri.go:89] found id: ""
	I0229 02:34:52.980450  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.980457  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:52.980463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:52.980525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:53.026393  370051 cri.go:89] found id: ""
	I0229 02:34:53.026418  370051 logs.go:276] 0 containers: []
	W0229 02:34:53.026426  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:53.026436  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:53.026453  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:53.075135  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:53.075174  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:53.092197  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:53.092223  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:53.164397  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:53.164423  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:53.164439  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:53.250310  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:53.250366  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:55.792993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:55.807152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:55.807229  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:55.867791  370051 cri.go:89] found id: ""
	I0229 02:34:55.867821  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.867830  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:55.867847  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:55.867925  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:55.922960  370051 cri.go:89] found id: ""
	I0229 02:34:55.922989  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.923001  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:55.923009  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:55.923076  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:55.972510  370051 cri.go:89] found id: ""
	I0229 02:34:55.972541  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.972552  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:55.972560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:55.972632  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:56.011948  370051 cri.go:89] found id: ""
	I0229 02:34:56.011980  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.011990  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:56.011999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:56.012077  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:56.052624  370051 cri.go:89] found id: ""
	I0229 02:34:56.052653  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.052662  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:56.052668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:56.052722  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:56.089075  370051 cri.go:89] found id: ""
	I0229 02:34:56.089100  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.089108  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:56.089114  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:56.089180  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:56.130369  370051 cri.go:89] found id: ""
	I0229 02:34:56.130403  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.130416  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:56.130424  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:56.130496  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:54.989569  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.991424  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.085652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.585291  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.586439  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.509734  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.510165  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.511749  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.177812  370051 cri.go:89] found id: ""
	I0229 02:34:56.177843  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.177854  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:56.177875  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:56.177894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:56.224294  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:56.224336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:56.275874  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:56.275909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:56.291172  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:56.291202  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:56.364839  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:56.364870  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:56.364888  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:58.950871  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:58.966327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:58.966389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:59.005914  370051 cri.go:89] found id: ""
	I0229 02:34:59.005952  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.005968  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:59.005976  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:59.006045  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:59.043962  370051 cri.go:89] found id: ""
	I0229 02:34:59.043993  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.044005  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:59.044013  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:59.044167  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:59.089398  370051 cri.go:89] found id: ""
	I0229 02:34:59.089426  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.089434  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:59.089440  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:59.089491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:59.130786  370051 cri.go:89] found id: ""
	I0229 02:34:59.130815  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.130824  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:59.130830  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:59.130909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:59.174807  370051 cri.go:89] found id: ""
	I0229 02:34:59.174836  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.174848  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:59.174855  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:59.174929  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:59.217745  370051 cri.go:89] found id: ""
	I0229 02:34:59.217792  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.217800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:59.217806  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:59.217858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:59.260906  370051 cri.go:89] found id: ""
	I0229 02:34:59.260939  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.260950  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:59.260957  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:59.261025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:59.299114  370051 cri.go:89] found id: ""
	I0229 02:34:59.299140  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.299150  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:59.299161  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:59.299173  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:59.349630  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:59.349672  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:59.365679  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:59.365710  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:59.438234  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:59.438261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:59.438280  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:59.524185  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:59.524219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:58.991975  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:01.489719  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:03.490315  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.087731  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.585197  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.008802  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.509210  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.068320  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:02.082910  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:02.082988  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:02.122095  370051 cri.go:89] found id: ""
	I0229 02:35:02.122132  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.122145  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:02.122153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:02.122245  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:02.160982  370051 cri.go:89] found id: ""
	I0229 02:35:02.161013  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.161029  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:02.161043  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:02.161108  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:02.200603  370051 cri.go:89] found id: ""
	I0229 02:35:02.200637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.200650  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:02.200658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:02.200746  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:02.243100  370051 cri.go:89] found id: ""
	I0229 02:35:02.243126  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.243134  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:02.243140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:02.243207  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:02.282758  370051 cri.go:89] found id: ""
	I0229 02:35:02.282793  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.282806  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:02.282815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:02.282884  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:02.324402  370051 cri.go:89] found id: ""
	I0229 02:35:02.324434  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.324444  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:02.324455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:02.324520  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:02.368608  370051 cri.go:89] found id: ""
	I0229 02:35:02.368637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.368650  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:02.368658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:02.368726  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:02.411449  370051 cri.go:89] found id: ""
	I0229 02:35:02.411484  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.411497  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:02.411509  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:02.411526  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:02.427942  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:02.427974  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:02.498848  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:02.498884  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:02.498902  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:02.585701  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:02.585749  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:02.642055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:02.642096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.201769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:05.215944  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:05.216020  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:05.254080  370051 cri.go:89] found id: ""
	I0229 02:35:05.254107  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.254121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:05.254128  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:05.254179  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:05.296990  370051 cri.go:89] found id: ""
	I0229 02:35:05.297022  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.297034  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:05.297042  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:05.297111  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:05.336241  370051 cri.go:89] found id: ""
	I0229 02:35:05.336275  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.336290  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:05.336299  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:05.336395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:05.377620  370051 cri.go:89] found id: ""
	I0229 02:35:05.377649  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.377658  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:05.377664  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:05.377712  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:05.416275  370051 cri.go:89] found id: ""
	I0229 02:35:05.416303  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.416311  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:05.416318  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:05.416373  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:05.455375  370051 cri.go:89] found id: ""
	I0229 02:35:05.455412  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.455426  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:05.455436  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:05.455507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:05.495862  370051 cri.go:89] found id: ""
	I0229 02:35:05.495887  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.495897  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:05.495905  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:05.495969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:05.541218  370051 cri.go:89] found id: ""
	I0229 02:35:05.541247  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.541260  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:05.541273  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:05.541288  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:05.629982  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:05.630023  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:05.719026  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:05.719066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.785318  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:05.785359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:05.801181  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:05.801214  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:05.871333  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:05.490857  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:07.991044  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.587458  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:09.086313  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.510265  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.510391  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.371982  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:08.386451  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:08.386514  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:08.430045  370051 cri.go:89] found id: ""
	I0229 02:35:08.430077  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.430090  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:08.430099  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:08.430169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:08.470547  370051 cri.go:89] found id: ""
	I0229 02:35:08.470583  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.470596  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:08.470604  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:08.470671  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:08.512637  370051 cri.go:89] found id: ""
	I0229 02:35:08.512676  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.512687  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:08.512695  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:08.512759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:08.556228  370051 cri.go:89] found id: ""
	I0229 02:35:08.556263  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.556271  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:08.556277  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:08.556335  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:08.613838  370051 cri.go:89] found id: ""
	I0229 02:35:08.613868  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.613878  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:08.613884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:08.613940  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:08.686408  370051 cri.go:89] found id: ""
	I0229 02:35:08.686442  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.686454  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:08.686462  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:08.686519  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:08.725665  370051 cri.go:89] found id: ""
	I0229 02:35:08.725697  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.725710  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:08.725719  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:08.725776  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:08.765639  370051 cri.go:89] found id: ""
	I0229 02:35:08.765666  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.765674  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:08.765684  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:08.765695  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:08.813097  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:08.813135  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:08.828880  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:08.828909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:08.903237  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:08.903261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:08.903281  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:08.991710  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:08.991745  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:10.491022  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:12.491159  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.086828  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.586274  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.009650  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.011571  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.536724  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:11.551614  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:11.551690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:11.593078  370051 cri.go:89] found id: ""
	I0229 02:35:11.593110  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.593121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:11.593129  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:11.593185  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:11.645696  370051 cri.go:89] found id: ""
	I0229 02:35:11.645729  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.645742  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:11.645751  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:11.645820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:11.691181  370051 cri.go:89] found id: ""
	I0229 02:35:11.691213  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.691226  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:11.691245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:11.691318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:11.745906  370051 cri.go:89] found id: ""
	I0229 02:35:11.745933  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.745946  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:11.745953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:11.746019  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:11.784895  370051 cri.go:89] found id: ""
	I0229 02:35:11.784927  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.784940  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:11.784949  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:11.785025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:11.825341  370051 cri.go:89] found id: ""
	I0229 02:35:11.825372  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.825384  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:11.825392  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:11.825464  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:11.862454  370051 cri.go:89] found id: ""
	I0229 02:35:11.862492  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.862505  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:11.862523  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:11.862604  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:11.908424  370051 cri.go:89] found id: ""
	I0229 02:35:11.908450  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.908459  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:11.908469  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:11.908487  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:11.956274  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:11.956313  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:11.972363  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:11.972397  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:12.052030  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:12.052057  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:12.052078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:12.138388  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:12.138431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.691474  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:14.724652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:14.724739  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:14.765210  370051 cri.go:89] found id: ""
	I0229 02:35:14.765237  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.765246  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:14.765253  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:14.765306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:14.808226  370051 cri.go:89] found id: ""
	I0229 02:35:14.808258  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.808270  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:14.808287  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:14.808357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:14.847999  370051 cri.go:89] found id: ""
	I0229 02:35:14.848030  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.848041  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:14.848049  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:14.848123  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:14.887221  370051 cri.go:89] found id: ""
	I0229 02:35:14.887248  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.887256  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:14.887263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:14.887339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:14.929905  370051 cri.go:89] found id: ""
	I0229 02:35:14.929933  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.929950  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:14.929956  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:14.930011  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:14.969697  370051 cri.go:89] found id: ""
	I0229 02:35:14.969739  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.969761  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:14.969770  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:14.969837  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:15.013387  370051 cri.go:89] found id: ""
	I0229 02:35:15.013418  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.013429  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:15.013437  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:15.013493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:15.058199  370051 cri.go:89] found id: ""
	I0229 02:35:15.058240  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.058253  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:15.058270  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:15.058287  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:15.110165  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:15.110213  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:15.127417  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:15.127452  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:15.203330  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:15.203370  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:15.203405  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:15.283455  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:15.283501  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.991352  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.490127  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.586556  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:18.085962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.509530  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.512518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:20.009873  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.829187  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:17.844678  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:17.844759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:17.885549  370051 cri.go:89] found id: ""
	I0229 02:35:17.885581  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.885594  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:17.885601  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:17.885670  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:17.925652  370051 cri.go:89] found id: ""
	I0229 02:35:17.925679  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.925691  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:17.925699  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:17.925766  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:17.963172  370051 cri.go:89] found id: ""
	I0229 02:35:17.963203  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.963215  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:17.963224  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:17.963282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:18.003528  370051 cri.go:89] found id: ""
	I0229 02:35:18.003560  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.003572  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:18.003579  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:18.003644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:18.046494  370051 cri.go:89] found id: ""
	I0229 02:35:18.046526  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.046537  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:18.046545  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:18.046613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:18.084963  370051 cri.go:89] found id: ""
	I0229 02:35:18.084993  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.085004  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:18.085013  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:18.085074  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:18.125521  370051 cri.go:89] found id: ""
	I0229 02:35:18.125547  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.125556  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:18.125563  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:18.125623  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:18.169963  370051 cri.go:89] found id: ""
	I0229 02:35:18.169995  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.170006  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:18.170020  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:18.170035  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:18.225414  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:18.225460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:18.242069  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:18.242108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:18.312704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:18.312728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:18.312742  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:18.397206  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:18.397249  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:20.968000  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:20.983115  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:20.983196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:21.025710  370051 cri.go:89] found id: ""
	I0229 02:35:21.025735  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.025743  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:21.025749  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:21.025812  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:21.065825  370051 cri.go:89] found id: ""
	I0229 02:35:21.065854  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.065862  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:21.065868  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:21.065928  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:21.104738  370051 cri.go:89] found id: ""
	I0229 02:35:21.104770  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.104782  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:21.104790  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:21.104871  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:19.990622  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491026  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491059  369591 pod_ready.go:81] duration metric: took 4m0.008454624s waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:22.491069  369591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:35:22.491077  369591 pod_ready.go:38] duration metric: took 4m5.576507129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:22.491094  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:35:22.491124  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:22.491174  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:22.562384  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:22.562412  369591 cri.go:89] found id: ""
	I0229 02:35:22.562422  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:22.562487  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.567997  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:22.568073  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:22.632786  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:22.632811  369591 cri.go:89] found id: ""
	I0229 02:35:22.632822  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:22.632887  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.637899  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:22.637975  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:22.681988  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:22.682014  369591 cri.go:89] found id: ""
	I0229 02:35:22.682024  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:22.682084  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.687515  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:22.687606  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:22.732907  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:22.732931  369591 cri.go:89] found id: ""
	I0229 02:35:22.732939  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:22.732995  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.737695  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:22.737758  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:22.779316  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:22.779341  369591 cri.go:89] found id: ""
	I0229 02:35:22.779349  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:22.779413  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.786533  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:22.786617  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:22.834391  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:22.834420  369591 cri.go:89] found id: ""
	I0229 02:35:22.834430  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:22.834500  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.839386  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:22.839458  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:22.881275  369591 cri.go:89] found id: ""
	I0229 02:35:22.881304  369591 logs.go:276] 0 containers: []
	W0229 02:35:22.881317  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:22.881326  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:22.881404  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:22.932822  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:22.932846  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:22.932850  369591 cri.go:89] found id: ""
	I0229 02:35:22.932858  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:22.932913  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.938541  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.943263  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:22.943288  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:22.994089  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:22.994122  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:23.051780  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:23.051821  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:23.099220  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:23.099251  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:23.157383  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:23.157429  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:23.206125  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:23.206180  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:23.261950  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:23.261982  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:23.324394  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:23.324427  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:23.400608  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:23.400648  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:20.589079  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:23.088469  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.510074  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:24.002388  369869 pod_ready.go:81] duration metric: took 4m0.000212386s waiting for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:24.002420  369869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:35:24.002439  369869 pod_ready.go:38] duration metric: took 4m6.701505951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:24.002490  369869 kubeadm.go:640] restartCluster took 4m24.423602043s
	W0229 02:35:24.002593  369869 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:35:24.002621  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:35:21.147180  370051 cri.go:89] found id: ""
	I0229 02:35:21.147211  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.147221  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:21.147228  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:21.147284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:21.187240  370051 cri.go:89] found id: ""
	I0229 02:35:21.187275  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.187287  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:21.187295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:21.187389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:21.228873  370051 cri.go:89] found id: ""
	I0229 02:35:21.228899  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.228917  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:21.228924  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:21.228992  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:21.268827  370051 cri.go:89] found id: ""
	I0229 02:35:21.268856  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.268867  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:21.268876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:21.268970  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:21.313253  370051 cri.go:89] found id: ""
	I0229 02:35:21.313288  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.313297  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:21.313307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:21.313328  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:21.448089  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:21.448120  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:21.448146  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:21.539941  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:21.539983  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:21.590148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:21.590186  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:21.647760  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:21.647797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:24.165842  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:24.183263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:24.183345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:24.233173  370051 cri.go:89] found id: ""
	I0229 02:35:24.233208  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.233219  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:24.233228  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:24.233301  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:24.276937  370051 cri.go:89] found id: ""
	I0229 02:35:24.276977  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.276989  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:24.276998  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:24.277066  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:24.314629  370051 cri.go:89] found id: ""
	I0229 02:35:24.314665  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.314678  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:24.314686  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:24.314753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:24.367585  370051 cri.go:89] found id: ""
	I0229 02:35:24.367618  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.367630  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:24.367639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:24.367709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:24.451128  370051 cri.go:89] found id: ""
	I0229 02:35:24.451151  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.451160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:24.451167  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:24.451258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:24.497302  370051 cri.go:89] found id: ""
	I0229 02:35:24.497336  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.497348  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:24.497357  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:24.497431  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:24.544593  370051 cri.go:89] found id: ""
	I0229 02:35:24.544621  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.544632  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:24.544640  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:24.544714  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:24.584570  370051 cri.go:89] found id: ""
	I0229 02:35:24.584601  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.584613  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:24.584626  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:24.584645  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:24.669019  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:24.669044  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:24.669061  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:24.752163  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:24.752205  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:24.811945  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:24.811985  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:24.874832  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:24.874873  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:23.928222  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:23.928275  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:23.983171  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:23.983216  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:23.999343  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:23.999382  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:24.180422  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:24.180476  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.745283  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:26.768785  369591 api_server.go:72] duration metric: took 4m17.549714658s to wait for apiserver process to appear ...
	I0229 02:35:26.768823  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:35:26.768885  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:26.768949  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:26.816275  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:26.816303  369591 cri.go:89] found id: ""
	I0229 02:35:26.816314  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:26.816379  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.820985  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:26.821062  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:26.870520  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.870545  369591 cri.go:89] found id: ""
	I0229 02:35:26.870555  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:26.870613  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.875785  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:26.875869  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:26.926844  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:26.926884  369591 cri.go:89] found id: ""
	I0229 02:35:26.926895  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:26.926963  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.933667  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:26.933747  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:26.988547  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:26.988575  369591 cri.go:89] found id: ""
	I0229 02:35:26.988584  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:26.988645  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.994520  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:26.994600  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.040568  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.040602  369591 cri.go:89] found id: ""
	I0229 02:35:27.040612  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:27.040679  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.046103  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.046161  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.094322  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.094345  369591 cri.go:89] found id: ""
	I0229 02:35:27.094357  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:27.094428  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.101702  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.101779  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.164549  369591 cri.go:89] found id: ""
	I0229 02:35:27.164584  369591 logs.go:276] 0 containers: []
	W0229 02:35:27.164596  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.164604  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:27.164674  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:27.219403  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:27.219431  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:27.219436  369591 cri.go:89] found id: ""
	I0229 02:35:27.219447  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:27.219510  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.226705  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.233551  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:27.233576  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.281111  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:27.281152  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.333686  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:27.333738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:27.948683  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.948736  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:28.018866  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:28.018917  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:28.164820  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:28.164857  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:28.222926  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:28.222963  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:28.265708  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:28.265738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:28.309311  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.309352  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:28.363295  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:28.363341  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:28.384099  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:28.384146  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:28.451988  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:28.452025  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:28.499748  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:28.499783  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:25.586753  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.589329  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.392846  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:27.419255  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:27.419339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:27.465294  370051 cri.go:89] found id: ""
	I0229 02:35:27.465325  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.465337  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:27.465345  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:27.465417  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:27.533393  370051 cri.go:89] found id: ""
	I0229 02:35:27.533424  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.533433  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:27.533441  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:27.533510  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:27.587195  370051 cri.go:89] found id: ""
	I0229 02:35:27.587221  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.587232  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:27.587240  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:27.587313  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:27.638597  370051 cri.go:89] found id: ""
	I0229 02:35:27.638624  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.638632  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:27.638639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:27.638709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.687695  370051 cri.go:89] found id: ""
	I0229 02:35:27.687730  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.687742  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:27.687750  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.687825  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.732275  370051 cri.go:89] found id: ""
	I0229 02:35:27.732309  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.732320  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:27.732327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.732389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.783069  370051 cri.go:89] found id: ""
	I0229 02:35:27.783109  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.783122  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.783133  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:27.783224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:27.832385  370051 cri.go:89] found id: ""
	I0229 02:35:27.832416  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.832429  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:27.832443  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.832460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:27.902610  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:27.902658  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:27.919900  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:27.919947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:28.003313  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
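	This failure is expected while the control plane is down: the v1.16.0 kubectl falls back to localhost:8443 and nothing is listening there, which matches the empty kube-apiserver container listing a few lines up. A quick way to confirm the same state by hand, a sketch using only commands known to exist:
	
	  sudo crictl ps -a --name kube-apiserver        # empty, per the listing above
	  curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"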
	I0229 02:35:28.003337  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:28.003356  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:28.100814  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.100853  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
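	The block above is one pass of minikube's log-gathering loop: for each expected control-plane component it lists matching CRI containers (all empty here, since the apiserver never came up), then tails unit logs for kubelet and CRI-O. The same enumeration as a standalone sketch, using only the commands already visible in the log:
	
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && { echo "no container matching $name"; continue; }
	    for id in $ids; do sudo /usr/bin/crictl logs --tail 400 "$id"; done
	  done
	  sudo journalctl -u kubelet -n 400   # kubelet unit logs
	  sudo journalctl -u crio -n 400      # CRI-O unit logs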
	I0229 02:35:30.654289  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:30.683056  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:30.683141  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:30.734678  370051 cri.go:89] found id: ""
	I0229 02:35:30.734704  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.734712  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:30.734719  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:30.734771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:30.780792  370051 cri.go:89] found id: ""
	I0229 02:35:30.780821  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.780830  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:30.780837  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:30.780904  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:30.827244  370051 cri.go:89] found id: ""
	I0229 02:35:30.827269  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.827278  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:30.827285  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:30.827336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:30.871305  370051 cri.go:89] found id: ""
	I0229 02:35:30.871333  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.871342  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:30.871348  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:30.871423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:30.910095  370051 cri.go:89] found id: ""
	I0229 02:35:30.910121  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.910130  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:30.910136  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:30.910188  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:30.955234  370051 cri.go:89] found id: ""
	I0229 02:35:30.955261  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.955271  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:30.955278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:30.955345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:30.996555  370051 cri.go:89] found id: ""
	I0229 02:35:30.996589  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.996602  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:30.996611  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:30.996687  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:31.036424  370051 cri.go:89] found id: ""
	I0229 02:35:31.036454  370051 logs.go:276] 0 containers: []
	W0229 02:35:31.036464  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:31.036474  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.036488  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.107928  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.107987  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.125268  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.125303  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.053142  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:35:31.060477  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:35:31.062106  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:35:31.062143  369591 api_server.go:131] duration metric: took 4.2933111s to wait for apiserver health ...
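	Here process 369591 (the no-preload profile) confirms the apiserver is healthy: the /healthz probe returns 200 with body "ok" after roughly 4.3 s of waiting. An equivalent manual probe, with the address taken from the log; -k skips certificate verification, since the serving cert is signed by the cluster's own CA:
	
	  curl -k https://192.168.72.114:8443/healthz
	  # prints: ok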
	I0229 02:35:31.062154  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:35:31.062189  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:31.062278  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:31.119877  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:31.119905  369591 cri.go:89] found id: ""
	I0229 02:35:31.119915  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:31.119981  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.125569  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:31.125648  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:31.193662  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.193693  369591 cri.go:89] found id: ""
	I0229 02:35:31.193704  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:31.193762  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.199267  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:31.199365  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:31.251832  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.251862  369591 cri.go:89] found id: ""
	I0229 02:35:31.251873  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:31.251935  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.258374  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:31.258477  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:31.309718  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.309745  369591 cri.go:89] found id: ""
	I0229 02:35:31.309753  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:31.309804  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.314949  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:31.315025  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:31.367936  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:31.367960  369591 cri.go:89] found id: ""
	I0229 02:35:31.367970  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:31.368038  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.373072  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:31.373137  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:31.420362  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:31.420390  369591 cri.go:89] found id: ""
	I0229 02:35:31.420402  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:31.420470  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.427151  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:31.427221  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:31.482289  369591 cri.go:89] found id: ""
	I0229 02:35:31.482321  369591 logs.go:276] 0 containers: []
	W0229 02:35:31.482333  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:31.482342  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:31.482405  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:31.526713  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.526738  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.526744  369591 cri.go:89] found id: ""
	I0229 02:35:31.526755  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:31.526807  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.531874  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.536727  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.536758  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.555901  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.555943  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.689587  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:31.689629  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.737625  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:31.737669  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.781015  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:31.781050  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.824727  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:31.824757  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.866867  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.866897  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.920324  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:31.920375  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.962783  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:31.962815  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:32.003525  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:32.003557  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:32.061377  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:32.061417  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:32.454041  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:32.454097  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:32.498969  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:32.499006  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:30.086688  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:32.087795  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:34.585435  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:35.060469  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:35:35.060503  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.060509  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.060516  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.060521  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.060525  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.060530  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.060538  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.060543  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.060553  369591 system_pods.go:74] duration metric: took 3.99838967s to wait for pod list to return data ...
	I0229 02:35:35.060563  369591 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:35:35.063638  369591 default_sa.go:45] found service account: "default"
	I0229 02:35:35.063665  369591 default_sa.go:55] duration metric: took 3.094531ms for default service account to be created ...
	I0229 02:35:35.063676  369591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:35:35.071344  369591 system_pods.go:86] 8 kube-system pods found
	I0229 02:35:35.071366  369591 system_pods.go:89] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.071371  369591 system_pods.go:89] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.071375  369591 system_pods.go:89] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.071380  369591 system_pods.go:89] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.071385  369591 system_pods.go:89] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.071389  369591 system_pods.go:89] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.071397  369591 system_pods.go:89] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.071408  369591 system_pods.go:89] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.071420  369591 system_pods.go:126] duration metric: took 7.737446ms to wait for k8s-apps to be running ...
	I0229 02:35:35.071433  369591 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:35:35.071482  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:35.091472  369591 system_svc.go:56] duration metric: took 20.031453ms WaitForService to wait for kubelet.
	I0229 02:35:35.091504  369591 kubeadm.go:581] duration metric: took 4m25.872454283s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:35:35.091523  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:35:35.095487  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:35:35.095509  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:35:35.095546  369591 node_conditions.go:105] duration metric: took 4.018229ms to run NodePressure ...
	I0229 02:35:35.095567  369591 start.go:228] waiting for startup goroutines ...
	I0229 02:35:35.095580  369591 start.go:233] waiting for cluster config update ...
	I0229 02:35:35.095594  369591 start.go:242] writing updated cluster config ...
	I0229 02:35:35.095888  369591 ssh_runner.go:195] Run: rm -f paused
	I0229 02:35:35.154197  369591 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 02:35:35.156089  369591 out.go:177] * Done! kubectl is now configured to use "no-preload-247751" cluster and "default" namespace by default
	W0229 02:35:31.217691  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:31.217717  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:31.217740  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:31.313847  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:31.313883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:33.861648  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:33.876887  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:33.876954  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:33.921545  370051 cri.go:89] found id: ""
	I0229 02:35:33.921577  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.921588  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:33.921597  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:33.921658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:33.972558  370051 cri.go:89] found id: ""
	I0229 02:35:33.972584  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.972592  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:33.972599  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:33.972662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:34.020821  370051 cri.go:89] found id: ""
	I0229 02:35:34.020852  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.020862  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:34.020873  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:34.020937  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:34.064076  370051 cri.go:89] found id: ""
	I0229 02:35:34.064110  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.064121  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:34.064129  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:34.064191  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:34.108523  370051 cri.go:89] found id: ""
	I0229 02:35:34.108557  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.108568  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:34.108576  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:34.108639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:34.149444  370051 cri.go:89] found id: ""
	I0229 02:35:34.149468  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.149478  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:34.149487  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:34.149562  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:34.193780  370051 cri.go:89] found id: ""
	I0229 02:35:34.193805  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.193814  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:34.193820  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:34.193913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:34.237088  370051 cri.go:89] found id: ""
	I0229 02:35:34.237118  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.237127  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:34.237137  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:34.237151  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:34.281055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:34.281091  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:34.333886  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:34.333925  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:34.353163  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:34.353204  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:34.465925  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:34.465951  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:34.465969  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:36.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:39.086456  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:37.049957  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:37.064297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:37.064384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:37.105669  370051 cri.go:89] found id: ""
	I0229 02:35:37.105703  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.105711  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:37.105720  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:37.105790  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:37.143753  370051 cri.go:89] found id: ""
	I0229 02:35:37.143788  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.143799  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:37.143808  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:37.143880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:37.180126  370051 cri.go:89] found id: ""
	I0229 02:35:37.180157  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.180166  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:37.180173  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:37.180227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:37.221135  370051 cri.go:89] found id: ""
	I0229 02:35:37.221173  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.221185  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:37.221193  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:37.221261  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:37.258888  370051 cri.go:89] found id: ""
	I0229 02:35:37.258920  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.258932  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:37.258940  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:37.259005  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:37.300970  370051 cri.go:89] found id: ""
	I0229 02:35:37.300998  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.301010  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:37.301018  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:37.301105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:37.349797  370051 cri.go:89] found id: ""
	I0229 02:35:37.349829  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.349841  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:37.349850  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:37.349916  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:37.408726  370051 cri.go:89] found id: ""
	I0229 02:35:37.408762  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.408773  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:37.408787  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:37.408805  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:37.462030  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:37.462064  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:37.477836  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:37.477868  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:37.553886  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:37.553924  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:37.553941  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:37.644637  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:37.644683  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:40.197937  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:40.212830  370051 kubeadm.go:640] restartCluster took 4m14.648338345s
	W0229 02:35:40.212984  370051 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 02:35:40.213021  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:35:40.673169  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:40.690108  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:40.702424  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:40.713782  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:35:40.713832  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:35:40.775345  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:35:40.775527  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:35:40.929045  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:35:40.929185  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:35:40.929310  370051 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:35:41.154311  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:35:41.154449  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:35:41.162905  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:35:41.317651  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:35:41.319260  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:35:41.319358  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:35:41.319458  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:35:41.319564  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:35:41.319675  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:35:41.319772  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:35:41.319857  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:35:41.319963  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:35:41.320066  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:35:41.320166  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:35:41.320289  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:35:41.320357  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:35:41.320439  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:35:41.457291  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:35:41.599703  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:35:41.766344  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:35:41.939397  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:35:41.940740  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:35:41.090698  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:43.585822  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:41.942544  370051 out.go:204]   - Booting up control plane ...
	I0229 02:35:41.942656  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:35:41.946949  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:35:41.949540  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:35:41.950426  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:35:41.953310  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
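	At this point process 370051 has given up on restarting the existing cluster ("apiserver process never appeared" after 4m14s) and falls back to wiping and re-initializing it. Condensed from the commands in the log above, using the v1.16.0 binaries; the --ignore-preflight-errors value is abbreviated here, the full list appears at the Start line:
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	    kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,...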
	I0229 02:35:45.586855  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:48.085961  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:50.585602  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:52.587992  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:55.085046  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.086710  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:59.590441  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.264698  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.262039409s)
	I0229 02:35:57.264826  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:57.285615  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:57.297607  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:57.309412  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:35:57.309471  369869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:35:57.540175  369869 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
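	The preflight warning above is advisory: kubeadm proceeds anyway, but the kubelet unit will not start on boot unless it is enabled, which is exactly the fix the message suggests:
	
	  sudo systemctl enable kubelet.service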
	I0229 02:36:02.086317  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:04.587625  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.714158  369869 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:36:06.714249  369869 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:36:06.714325  369869 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:36:06.714490  369869 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:36:06.714633  369869 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:36:06.714742  369869 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:36:06.716059  369869 out.go:204]   - Generating certificates and keys ...
	I0229 02:36:06.716160  369869 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:36:06.716250  369869 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:36:06.716357  369869 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:36:06.716434  369869 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:36:06.716508  369869 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:36:06.716572  369869 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:36:06.716649  369869 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:36:06.716722  369869 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:36:06.716824  369869 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:36:06.716952  369869 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:36:06.717008  369869 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:36:06.717080  369869 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:36:06.717147  369869 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:36:06.717221  369869 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:36:06.717298  369869 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:36:06.717367  369869 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:36:06.717474  369869 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:36:06.717559  369869 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:36:06.718770  369869 out.go:204]   - Booting up control plane ...
	I0229 02:36:06.718866  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:36:06.718983  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:36:06.719074  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:36:06.719230  369869 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:36:06.719364  369869 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:36:06.719431  369869 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:36:06.719628  369869 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:36:06.719749  369869 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503520 seconds
	I0229 02:36:06.719906  369869 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:36:06.720060  369869 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:36:06.720126  369869 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:36:06.720344  369869 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-071485 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:36:06.720433  369869 kubeadm.go:322] [bootstrap-token] Using token: oueq3v.8ghuyl6sece1tffl
	I0229 02:36:06.721973  369869 out.go:204]   - Configuring RBAC rules ...
	I0229 02:36:06.722107  369869 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:36:06.722252  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:36:06.722444  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:36:06.722643  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:36:06.722793  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:36:06.722937  369869 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:36:06.723081  369869 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:36:06.723119  369869 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:36:06.723188  369869 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:36:06.723198  369869 kubeadm.go:322] 
	I0229 02:36:06.723285  369869 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:36:06.723310  369869 kubeadm.go:322] 
	I0229 02:36:06.723426  369869 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:36:06.723436  369869 kubeadm.go:322] 
	I0229 02:36:06.723467  369869 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:36:06.723556  369869 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:36:06.723637  369869 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:36:06.723646  369869 kubeadm.go:322] 
	I0229 02:36:06.723713  369869 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:36:06.723722  369869 kubeadm.go:322] 
	I0229 02:36:06.723799  369869 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:36:06.723809  369869 kubeadm.go:322] 
	I0229 02:36:06.723869  369869 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:36:06.723979  369869 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:36:06.724073  369869 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:36:06.724083  369869 kubeadm.go:322] 
	I0229 02:36:06.724178  369869 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:36:06.724269  369869 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:36:06.724279  369869 kubeadm.go:322] 
	I0229 02:36:06.724389  369869 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724520  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 \
	I0229 02:36:06.724552  369869 kubeadm.go:322] 	--control-plane 
	I0229 02:36:06.724560  369869 kubeadm.go:322] 
	I0229 02:36:06.724665  369869 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:36:06.724675  369869 kubeadm.go:322] 
	I0229 02:36:06.724767  369869 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724923  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
	I0229 02:36:06.724941  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:36:06.724952  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:36:06.726566  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:36:07.088398  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:09.587442  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.727880  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:36:06.786343  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
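	The scp line above writes the 457-byte bridge conflist that the "Configuring bridge CNI" step announced. The actual file content is not shown in the log; the following is a hypothetical minimal conflist of the same general shape, for illustration only (the plugin list, subnet, and cniVersion below are assumptions, not the real file):
	
	  # assumed content; the real 1-k8s.conflist written by minikube differs in detail
	  cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	  {
	    "cniVersion": "0.4.0",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isGateway": true,
	        "ipMasq": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF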
	I0229 02:36:06.842349  369869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:36:06.842420  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=default-k8s-diff-port-071485 minikube.k8s.io/updated_at=2024_02_29T02_36_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:06.842428  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.196763  369869 ops.go:34] apiserver oom_adj: -16
	I0229 02:36:07.196958  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.696991  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.197336  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.697155  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.197955  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.697107  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:10.197816  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.085528  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:14.085852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:10.697486  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.197744  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.697179  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.197614  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.697015  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.197983  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.697315  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.196982  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.698012  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.197896  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.697895  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.197062  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.697819  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.197222  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.697031  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.197683  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.697094  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.870924  369869 kubeadm.go:1088] duration metric: took 12.028572011s to wait for elevateKubeSystemPrivileges.
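	The burst of identical "kubectl get sa default" runs above (one roughly every 500 ms) is the wait for the default service account to exist, which is what the 12.03 s elevateKubeSystemPrivileges metric measures. The same wait as an explicit loop, a sketch built from the command in the log:
	
	  until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done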
	I0229 02:36:18.870961  369869 kubeadm.go:406] StartCluster complete in 5m19.353203226s
	I0229 02:36:18.870986  369869 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.871077  369869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:36:18.873654  369869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.873954  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:36:18.874041  369869 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:36:18.874118  369869 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874130  369869 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874142  369869 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874149  369869 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:36:18.874152  369869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071485"
	I0229 02:36:18.874201  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874256  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:36:18.874341  369869 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874359  369869 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874367  369869 addons.go:243] addon metrics-server should already be in state true
	I0229 02:36:18.874422  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874637  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874691  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874811  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874846  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.892207  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0229 02:36:18.892260  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0229 02:36:18.892967  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.892986  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.893508  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893528  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893680  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893700  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893936  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894102  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894143  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0229 02:36:18.894331  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.894582  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.894594  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.894613  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.895109  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.895143  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.895508  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.896106  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.896142  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.898127  369869 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.898143  369869 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:36:18.898168  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.898482  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.898516  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.917303  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0229 02:36:18.917472  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I0229 02:36:18.917747  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.917894  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.918493  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918510  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.918654  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918665  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.919012  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919077  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919229  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.919754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.921030  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.922677  369869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:36:18.921622  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.923872  369869 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:18.923899  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:36:18.923919  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.925237  369869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:36:18.926153  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:36:18.924603  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0229 02:36:18.926269  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:36:18.926303  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.927739  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.928184  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.928277  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.928299  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.930032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.930057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.930386  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.930456  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.930614  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.930723  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.930914  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931014  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.931133  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.931185  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.931533  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.931553  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.931576  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931737  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.932033  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.932190  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.948311  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0229 02:36:18.949328  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.949793  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.949819  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.950313  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.950529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.952381  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.952660  369869 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:18.952673  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:36:18.952689  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.956332  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.956779  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.956808  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.957117  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.957313  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.957425  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.957485  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:19.128114  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:19.141619  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:36:19.141649  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:36:19.169945  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:19.187099  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:36:19.187124  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:36:19.211358  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
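The long sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block in front of the forward directive so that host.minikube.internal resolves to the host-side gateway (192.168.61.1 on this network), adds a log directive before errors, and pipes the result back through `kubectl replace -f -`. After the pipeline the affected part of the Corefile should look roughly like the fragment below; the surrounding directives are omitted here and vary by CoreDNS version, so treat this as a reconstruction from the sed expressions, not a captured file:

        log
        errors
        ...
        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf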
	I0229 02:36:19.289856  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:36:19.289880  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:36:19.398720  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:36:19.414512  369869 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-071485" context rescaled to 1 replicas
	I0229 02:36:19.414562  369869 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:36:19.416389  369869 out.go:177] * Verifying Kubernetes components...
	I0229 02:36:15.586606  369508 pod_ready.go:81] duration metric: took 4m0.008250092s waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:36:15.586638  369508 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:36:15.586648  369508 pod_ready.go:38] duration metric: took 4m5.573018241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
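The 4m0s wait above gave up because the metrics-server pod never reported its PodReady condition as True (it stayed Pending with ContainersNotReady, as the repeated `has status "Ready":"False"` lines show). A minimal client-go sketch of the per-pod check behind those messages; the package and function names are hypothetical, only the API calls are real:

package readycheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the pod's PodReady condition is True.
// A pod that is Pending, or whose containers are not ready, returns false.
func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}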
	I0229 02:36:15.586669  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:36:15.586707  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:15.586771  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:15.644937  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:15.644969  369508 cri.go:89] found id: ""
	I0229 02:36:15.644980  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:15.645054  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.653058  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:15.653137  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:15.709225  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:15.709254  369508 cri.go:89] found id: ""
	I0229 02:36:15.709264  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:15.709333  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.715304  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:15.715391  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:15.769593  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:15.769627  369508 cri.go:89] found id: ""
	I0229 02:36:15.769637  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:15.769702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.775157  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:15.775230  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:15.820002  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:15.820030  369508 cri.go:89] found id: ""
	I0229 02:36:15.820040  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:15.820105  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.827058  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:15.827122  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:15.875030  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:15.875063  369508 cri.go:89] found id: ""
	I0229 02:36:15.875074  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:15.875142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.880489  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:15.880555  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:15.929452  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:15.929476  369508 cri.go:89] found id: ""
	I0229 02:36:15.929484  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:15.929545  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.934321  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:15.934396  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:15.981960  369508 cri.go:89] found id: ""
	I0229 02:36:15.981997  369508 logs.go:276] 0 containers: []
	W0229 02:36:15.982006  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:15.982014  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:15.982077  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:16.034169  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.034196  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.034201  369508 cri.go:89] found id: ""
	I0229 02:36:16.034210  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:16.034281  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.039463  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.044719  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:16.044748  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:16.111048  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:16.111084  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:16.278784  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:16.278832  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:16.333048  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:16.333085  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:16.376514  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:16.376555  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:16.420840  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:16.420944  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:16.468273  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:16.468308  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:16.526001  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:16.526043  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.569084  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:16.569120  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.609818  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:16.609847  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:16.660979  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:16.661019  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:16.677397  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:16.677432  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:16.732421  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:16.732464  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:19.417788  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:21.277741  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.107753576s)
	I0229 02:36:21.277802  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277815  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.066425449s)
	I0229 02:36:21.277873  369869 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.149690589s)
	I0229 02:36:21.277908  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277918  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278323  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278331  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278339  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278351  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278445  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278458  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278465  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278474  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278519  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278592  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278603  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280452  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.280470  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280482  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.300880  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.300907  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.301193  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.301217  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.572633  369869 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.154816183s)
	I0229 02:36:21.572676  369869 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.572635  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.173852857s)
	I0229 02:36:21.572814  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.572842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.573207  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573215  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573228  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.573236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573538  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573575  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573587  369869 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-071485"
	I0229 02:36:21.575111  369869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:36:19.738493  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:36:19.758171  369508 api_server.go:72] duration metric: took 4m17.008228834s to wait for apiserver process to appear ...
	I0229 02:36:19.758199  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:36:19.758281  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:19.758349  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:19.811042  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:19.811071  369508 cri.go:89] found id: ""
	I0229 02:36:19.811082  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:19.811145  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.817952  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:19.818034  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:19.871006  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:19.871033  369508 cri.go:89] found id: ""
	I0229 02:36:19.871043  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:19.871109  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.877440  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:19.877512  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:19.928043  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:19.928071  369508 cri.go:89] found id: ""
	I0229 02:36:19.928081  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:19.928142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.935299  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:19.935363  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:19.977360  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:19.977391  369508 cri.go:89] found id: ""
	I0229 02:36:19.977402  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:19.977482  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.982361  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:19.982442  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:20.025903  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.025931  369508 cri.go:89] found id: ""
	I0229 02:36:20.025941  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:20.026012  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.031390  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:20.031477  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:20.080768  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.080792  369508 cri.go:89] found id: ""
	I0229 02:36:20.080800  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:20.080864  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.087322  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:20.087388  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:20.139067  369508 cri.go:89] found id: ""
	I0229 02:36:20.139111  369508 logs.go:276] 0 containers: []
	W0229 02:36:20.139124  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:20.139132  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:20.139195  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:20.193052  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:20.193085  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:20.193091  369508 cri.go:89] found id: ""
	I0229 02:36:20.193101  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:20.193174  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.199740  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.205385  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:20.205414  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:20.360843  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:20.360894  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:20.411077  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:20.411113  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:20.459855  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:20.459910  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:20.517056  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:20.517101  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.568151  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:20.568185  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.637131  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:20.637165  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.144933  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:21.144980  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:21.206565  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:21.206607  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:21.257071  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:21.257118  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:21.315541  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:21.315589  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:21.358630  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:21.358665  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:21.398170  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:21.398201  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:23.914059  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:36:23.923854  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:36:23.926443  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:36:23.926466  369508 api_server.go:131] duration metric: took 4.168260413s to wait for apiserver health ...
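The healthz probe logged just above is a plain HTTPS GET against the apiserver's /healthz endpoint, treating an HTTP 200 whose body is "ok" as healthy (hence the two-line "returned 200: / ok" output). A self-contained sketch of that probe; minikube authenticates with the cluster's client certificates, whereas this simplified version skips TLS verification purely to stay short:

package healthz

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy GETs a /healthz URL such as
// https://192.168.50.218:8443/healthz and reports whether the
// apiserver answered 200 with the body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for this sketch only; the real check
			// verifies the cluster CA and presents client certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return false, fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return strings.TrimSpace(string(body)) == "ok", nil
}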
	I0229 02:36:23.926475  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:36:23.926506  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:23.926566  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:24.013825  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:24.013849  369508 cri.go:89] found id: ""
	I0229 02:36:24.013857  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:24.013913  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.019432  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:24.019506  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:24.078857  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.078877  369508 cri.go:89] found id: ""
	I0229 02:36:24.078885  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:24.078945  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.083761  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:24.083822  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:24.133681  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:24.133707  369508 cri.go:89] found id: ""
	I0229 02:36:24.133717  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:24.133779  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.139165  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:24.139228  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:24.185863  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.185883  369508 cri.go:89] found id: ""
	I0229 02:36:24.185892  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:24.185939  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.191094  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:24.191164  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:24.232922  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.232953  369508 cri.go:89] found id: ""
	I0229 02:36:24.232963  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:24.233031  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.238154  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:24.238252  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:24.280735  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:24.280760  369508 cri.go:89] found id: ""
	I0229 02:36:24.280769  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:24.280842  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.285497  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:24.285558  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:24.324979  369508 cri.go:89] found id: ""
	I0229 02:36:24.325007  369508 logs.go:276] 0 containers: []
	W0229 02:36:24.325016  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:24.325022  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:24.325085  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:24.370875  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:24.370908  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:24.370912  369508 cri.go:89] found id: ""
	I0229 02:36:24.370919  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:24.370973  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.378247  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.382856  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:24.382899  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.430889  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:24.430919  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.470370  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:24.470407  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.576300  369869 addons.go:505] enable addons completed in 2.702258052s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 02:36:21.582468  369869 node_ready.go:49] node "default-k8s-diff-port-071485" has status "Ready":"True"
	I0229 02:36:21.582494  369869 node_ready.go:38] duration metric: took 9.804213ms waiting for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.582506  369869 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:36:21.608694  369869 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125662  369869 pod_ready.go:92] pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.125695  369869 pod_ready.go:81] duration metric: took 1.51697387s waiting for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125707  369869 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141831  369869 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.141855  369869 pod_ready.go:81] duration metric: took 16.140002ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141864  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154216  369869 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.154261  369869 pod_ready.go:81] duration metric: took 12.389751ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154276  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166057  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.166085  369869 pod_ready.go:81] duration metric: took 11.798242ms waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166098  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179414  369869 pod_ready.go:92] pod "kube-proxy-gr44w" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.179437  369869 pod_ready.go:81] duration metric: took 13.331411ms waiting for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179447  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576569  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.576597  369869 pod_ready.go:81] duration metric: took 397.142516ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576611  369869 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:21.953781  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:36:21.954431  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:21.954685  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:24.880947  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:24.880985  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:24.939045  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:24.939079  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.987109  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:24.987144  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:25.049095  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:25.049131  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:25.091654  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:25.091686  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:25.153281  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:25.153326  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:25.169544  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:25.169575  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:25.294469  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:25.294504  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:25.346867  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:25.346900  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:25.388876  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:25.388921  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:27.937848  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:36:27.937878  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.937883  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.937888  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.937891  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.937894  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.937898  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.937903  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.937908  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.937922  369508 system_pods.go:74] duration metric: took 4.011440564s to wait for pod list to return data ...
	I0229 02:36:27.937933  369508 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:36:27.940602  369508 default_sa.go:45] found service account: "default"
	I0229 02:36:27.940623  369508 default_sa.go:55] duration metric: took 2.681589ms for default service account to be created ...
	I0229 02:36:27.940632  369508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:36:27.947433  369508 system_pods.go:86] 8 kube-system pods found
	I0229 02:36:27.947455  369508 system_pods.go:89] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.947466  369508 system_pods.go:89] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.947472  369508 system_pods.go:89] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.947482  369508 system_pods.go:89] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.947491  369508 system_pods.go:89] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.947497  369508 system_pods.go:89] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.947508  369508 system_pods.go:89] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.947518  369508 system_pods.go:89] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.947531  369508 system_pods.go:126] duration metric: took 6.892538ms to wait for k8s-apps to be running ...
	I0229 02:36:27.947539  369508 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:36:27.947591  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:27.965730  369508 system_svc.go:56] duration metric: took 18.181663ms WaitForService to wait for kubelet.
	I0229 02:36:27.965756  369508 kubeadm.go:581] duration metric: took 4m25.215820473s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:36:27.965780  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:36:27.970094  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:36:27.970123  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:36:27.970138  369508 node_conditions.go:105] duration metric: took 4.347423ms to run NodePressure ...
	I0229 02:36:27.970152  369508 start.go:228] waiting for startup goroutines ...
	I0229 02:36:27.970162  369508 start.go:233] waiting for cluster config update ...
	I0229 02:36:27.970175  369508 start.go:242] writing updated cluster config ...
	I0229 02:36:27.970529  369508 ssh_runner.go:195] Run: rm -f paused
	I0229 02:36:28.020686  369508 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:36:28.022730  369508 out.go:177] * Done! kubectl is now configured to use "embed-certs-915633" cluster and "default" namespace by default
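(Editor's note, not part of the log: the "minor skew: 1" reported above is within kubectl's supported version skew of one minor release against the API server, so the warning is informational. A minimal manual smoke test of the cluster minikube just configured might look like the sketch below; it assumes the kubeconfig context matches the profile name "embed-certs-915633" exactly as the log states, and was not run as part of this test.)

    # Sketch: verify the freshly configured embed-certs cluster by hand.
    kubectl --context embed-certs-915633 get nodes
    kubectl --context embed-certs-915633 -n kube-system get pods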
	I0229 02:36:25.585985  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:28.085278  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:26.954801  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:26.955093  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:30.583462  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:32.584198  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:34.585129  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:37.085551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:39.584450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:36.955344  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:36.955543  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:41.585000  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:44.083919  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:46.085694  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:48.583474  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:50.584026  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:53.084622  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:55.084729  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:57.084941  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:59.586329  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:56.957911  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:56.958178  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:02.085189  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:04.085672  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:06.586906  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:09.085130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:11.583811  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:13.585179  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:16.083670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:18.084884  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:20.584395  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:22.585487  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:24.586088  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:26.586608  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:29.084644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:31.585292  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:34.083690  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:36.959509  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:37:36.959795  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:36.959812  370051 kubeadm.go:322] 
	I0229 02:37:36.959848  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:37:36.959887  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:37:36.959893  370051 kubeadm.go:322] 
	I0229 02:37:36.959937  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:37:36.959991  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:37:36.960142  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:37:36.960167  370051 kubeadm.go:322] 
	I0229 02:37:36.960282  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:37:36.960318  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:37:36.960362  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:37:36.960371  370051 kubeadm.go:322] 
	I0229 02:37:36.960482  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:37:36.960617  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:37:36.960756  370051 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:37:36.960839  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:37:36.960951  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:37:36.961015  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:37:36.961366  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:37:36.961507  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:37:36.961616  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 02:37:36.961763  370051 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
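(Editor's note, not part of the log: the kubeadm advice above suggests docker commands, but this job runs CRI-O, so the crictl analogues apply. A hedged sketch of the same troubleshooting steps, run on the node; none of these were executed by the test, and <CONTAINERID> is a placeholder for whatever the listing turns up.)

    # Same checks kubeadm suggests, adapted to CRI-O (sketch, not from the log):
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl ps -a | grep kube | grep -v pause
    sudo crictl logs <CONTAINERID>   # inspect the failing container found above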
	
	I0229 02:37:36.961835  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:37:37.427665  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:37:37.443045  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:37:37.456937  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:37:37.456979  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:37:37.529093  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:37:37.529246  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:37:37.670260  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:37:37.670417  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:37:37.670548  370051 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:37:37.904220  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:37:37.905569  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:37:37.914919  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:37:38.070911  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:37:38.072738  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:37:38.072860  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:37:38.072951  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:37:38.073049  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:37:38.073132  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:37:38.073230  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:37:38.073299  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:37:38.073376  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:37:38.073458  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:37:38.073566  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:37:38.073680  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:37:38.073720  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:37:38.073794  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:37:38.209805  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:37:38.305550  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:37:38.464715  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:37:38.623139  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:37:38.624364  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:37:36.084556  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.086561  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.625883  370051 out.go:204]   - Booting up control plane ...
	I0229 02:37:38.626039  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:37:38.630668  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:37:38.631740  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:37:38.632687  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:37:38.636043  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:37:40.583589  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:42.583968  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:44.584409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:46.586413  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:49.084223  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:51.584770  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:53.584871  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:55.585299  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:58.084753  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:00.584432  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:03.085511  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:05.585519  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:08.085774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:10.087984  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:12.584744  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:15.085757  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:17.584807  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:19.588130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:18.637746  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:38:18.638616  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:18.638883  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:22.084442  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:24.085227  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:23.639374  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:23.639613  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:26.087774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:28.584872  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:30.587375  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.085060  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:35.086106  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.640169  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:33.640468  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:37.584670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:40.085797  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:42.585365  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:44.587079  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:46.590638  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:49.086500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:51.584286  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.587405  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.640871  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:53.641147  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:56.084551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:58.085668  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:00.086247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:02.588854  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:05.085163  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:07.090885  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:09.583687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:11.585184  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:14.085800  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:16.086643  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:18.584073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:21.084992  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:23.585496  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:25.586111  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:28.086464  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:33.642813  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:39:33.643083  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:39:33.643099  370051 kubeadm.go:322] 
	I0229 02:39:33.643153  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:39:33.643206  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:39:33.643213  370051 kubeadm.go:322] 
	I0229 02:39:33.643252  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:39:33.643296  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:39:33.643443  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:39:33.643455  370051 kubeadm.go:322] 
	I0229 02:39:33.643605  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:39:33.643655  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:39:33.643700  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:39:33.643714  370051 kubeadm.go:322] 
	I0229 02:39:33.643871  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:39:33.644040  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:39:33.644193  370051 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:39:33.644272  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:39:33.644371  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:39:33.644412  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:39:33.644855  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:39:33.644972  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:39:33.645065  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:39:33.645132  370051 kubeadm.go:406] StartCluster complete in 8m8.138449101s
	I0229 02:39:33.645178  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:39:33.645255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:39:33.699121  370051 cri.go:89] found id: ""
	I0229 02:39:33.699154  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.699166  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:39:33.699174  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:39:33.699240  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:39:33.747229  370051 cri.go:89] found id: ""
	I0229 02:39:33.747260  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.747272  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:39:33.747279  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:39:33.747349  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:39:33.789303  370051 cri.go:89] found id: ""
	I0229 02:39:33.789334  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.789343  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:39:33.789350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:39:33.789413  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:39:33.832769  370051 cri.go:89] found id: ""
	I0229 02:39:33.832801  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.832814  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:39:33.832824  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:39:33.832891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:39:33.881508  370051 cri.go:89] found id: ""
	I0229 02:39:33.881543  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.881554  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:39:33.881571  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:39:33.881635  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:39:33.941691  370051 cri.go:89] found id: ""
	I0229 02:39:33.941728  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.941740  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:39:33.941749  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:39:33.941822  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:39:33.990639  370051 cri.go:89] found id: ""
	I0229 02:39:33.990681  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.990704  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:39:33.990713  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:39:33.990774  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:39:34.038426  370051 cri.go:89] found id: ""
	I0229 02:39:34.038460  370051 logs.go:276] 0 containers: []
	W0229 02:39:34.038470  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:39:34.038480  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:39:34.038497  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:39:34.054571  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:39:34.054604  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:39:34.131297  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:39:34.131323  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:39:34.131337  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:39:34.232302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:39:34.232349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:39:34.283314  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:39:34.283351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:39:34.336858  370051 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:39:34.336920  370051 out.go:239] * 
	W0229 02:39:34.336985  370051 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:39:34.337006  370051 out.go:239] * 
	W0229 02:39:34.337787  370051 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:39:34.340744  370051 out.go:177] 
	W0229 02:39:34.342096  370051 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:39:34.342137  370051 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:39:34.342160  370051 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:39:34.343540  370051 out.go:177] 
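(Editor's note, not part of the log: minikube's own suggestion above is to retry with --extra-config=kubelet.cgroup-driver=systemd. An illustrative retry invocation follows; the profile name is a placeholder, and the --driver, --container-runtime, and --kubernetes-version flags are assumptions chosen to match this KVM/CRI-O/v1.16.0 run rather than values taken from the log.)

    # Illustrative retry per the suggestion above (placeholders, not logged values):
    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.16.0 \
      --extra-config=kubelet.cgroup-driver=systemd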
	I0229 02:39:30.584963  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:32.585599  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:34.588073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:37.085513  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:39.584721  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:41.585072  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:44.086996  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:46.587437  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:49.083819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:51.084472  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:53.085522  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:55.585518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:58.084454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:00.085075  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:02.588500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:05.083707  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:07.084423  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:09.584552  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:11.590611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:14.084618  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:16.597479  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:19.086312  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:21.586450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:23.583798  369869 pod_ready.go:81] duration metric: took 4m0.007166298s waiting for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
	E0229 02:40:23.583824  369869 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:40:23.583834  369869 pod_ready.go:38] duration metric: took 4m2.001316522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
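(Editor's note, not part of the log: the wait above gave up after 4m0s because the metrics-server pod never reported Ready. A manual follow-up to see why might look like the sketch below; the pod name is taken from the log above, and these commands were not run by the test.)

    # Sketch: inspect why the metrics-server pod stayed unready.
    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-fpwzl
    kubectl -n kube-system get events \
      --field-selector involvedObject.name=metrics-server-57f55c9bc5-fpwzl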
	I0229 02:40:23.583860  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:40:23.583899  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:23.584002  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:23.655958  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:23.655987  369869 cri.go:89] found id: ""
	I0229 02:40:23.655997  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:23.656057  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.661125  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:23.661199  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:23.712373  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:23.712400  369869 cri.go:89] found id: ""
	I0229 02:40:23.712410  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:23.712508  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.718149  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:23.718209  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:23.775835  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:23.775858  369869 cri.go:89] found id: ""
	I0229 02:40:23.775867  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:23.775923  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.780698  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:23.780792  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:23.825914  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:23.825939  369869 cri.go:89] found id: ""
	I0229 02:40:23.825949  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:23.826017  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.830870  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:23.830932  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:23.868737  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:23.868767  369869 cri.go:89] found id: ""
	I0229 02:40:23.868777  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:23.868841  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.873522  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:23.873598  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:23.918640  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:23.918663  369869 cri.go:89] found id: ""
	I0229 02:40:23.918671  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:23.918725  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.923456  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:23.923517  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:23.963045  369869 cri.go:89] found id: ""
	I0229 02:40:23.963071  369869 logs.go:276] 0 containers: []
	W0229 02:40:23.963080  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:23.963085  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:23.963136  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:24.006380  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:24.006402  369869 cri.go:89] found id: ""
	I0229 02:40:24.006409  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:24.006459  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:24.012228  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:24.012269  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:24.095110  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:24.095354  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:24.117199  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:24.117229  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:24.181064  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:24.181126  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:24.239267  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:24.239305  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:24.283248  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:24.283281  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:24.746786  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:24.746831  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:24.764451  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:24.764487  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:24.917582  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:24.917625  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:24.980095  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:24.980142  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:25.028219  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:25.028253  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:25.083840  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:25.083874  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:25.131148  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:25.131179  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:25.179314  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:25.179340  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:25.179415  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:25.179432  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:25.179455  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:25.179471  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:25.179479  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:35.181209  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:40:35.199982  369869 api_server.go:72] duration metric: took 4m15.785374734s to wait for apiserver process to appear ...
	I0229 02:40:35.200012  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:40:35.200052  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:35.200109  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:35.241760  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:35.241786  369869 cri.go:89] found id: ""
	I0229 02:40:35.241795  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:35.241846  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.247188  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:35.247294  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:35.293992  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:35.294022  369869 cri.go:89] found id: ""
	I0229 02:40:35.294033  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:35.294098  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.298900  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:35.298971  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:35.340809  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:35.340835  369869 cri.go:89] found id: ""
	I0229 02:40:35.340843  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:35.340903  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.345913  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:35.345988  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:35.392027  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:35.392061  369869 cri.go:89] found id: ""
	I0229 02:40:35.392072  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:35.392140  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.397043  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:35.397120  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:35.452900  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:35.452931  369869 cri.go:89] found id: ""
	I0229 02:40:35.452942  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:35.453014  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.459221  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:35.459303  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:35.503530  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:35.503555  369869 cri.go:89] found id: ""
	I0229 02:40:35.503563  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:35.503615  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.509021  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:35.509083  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:35.553777  369869 cri.go:89] found id: ""
	I0229 02:40:35.553803  369869 logs.go:276] 0 containers: []
	W0229 02:40:35.553812  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:35.553818  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:35.553868  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:35.605234  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:35.605259  369869 cri.go:89] found id: ""
	I0229 02:40:35.605267  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:35.605333  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.610433  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:35.610465  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:36.030757  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:36.030807  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:36.047193  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:36.047224  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:36.105936  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:36.105983  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:36.169028  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:36.169080  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:36.241640  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:36.241678  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:36.284787  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:36.284822  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:36.333264  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:36.333293  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:36.385436  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:36.385468  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:36.463289  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:36.463491  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:36.485748  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:36.485782  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:36.604181  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:36.604218  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:36.659210  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:36.659247  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:36.704612  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:36.704640  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:36.704695  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:36.704706  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:36.704712  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:36.704719  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:36.704726  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:46.705868  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:40:46.711301  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:40:46.713000  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:40:46.713025  369869 api_server.go:131] duration metric: took 11.513005073s to wait for apiserver health ...
	I0229 02:40:46.713034  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:40:46.713061  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:46.713121  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:46.759486  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:46.759505  369869 cri.go:89] found id: ""
	I0229 02:40:46.759517  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:46.759581  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.764215  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:46.764299  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:46.805016  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:46.805042  369869 cri.go:89] found id: ""
	I0229 02:40:46.805049  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:46.805113  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.810213  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:46.810284  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:46.862825  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:46.862855  369869 cri.go:89] found id: ""
	I0229 02:40:46.862867  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:46.862923  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.867531  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:46.867588  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:46.914211  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:46.914247  369869 cri.go:89] found id: ""
	I0229 02:40:46.914258  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:46.914327  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.918857  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:46.918921  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:46.959981  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:46.960016  369869 cri.go:89] found id: ""
	I0229 02:40:46.960027  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:46.960095  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.964789  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:46.964846  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:47.009289  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:47.009313  369869 cri.go:89] found id: ""
	I0229 02:40:47.009322  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:47.009390  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:47.015339  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:47.015413  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:47.059195  369869 cri.go:89] found id: ""
	I0229 02:40:47.059227  369869 logs.go:276] 0 containers: []
	W0229 02:40:47.059239  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:47.059254  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:47.059306  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:47.103293  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:47.103323  369869 cri.go:89] found id: ""
	I0229 02:40:47.103334  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:47.103401  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:47.108048  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:47.108076  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:47.157407  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:47.157441  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:47.591202  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:47.591261  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:47.644877  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:47.644910  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:47.784217  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:47.784249  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:47.839113  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:47.839144  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:47.885581  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:47.885616  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:47.930971  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:47.931009  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:47.986352  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:47.986437  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:48.067103  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:48.067316  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:48.088373  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:48.088407  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:48.105750  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:48.105781  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:48.154640  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:48.154677  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:48.196009  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:48.196042  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:48.196112  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:48.196128  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:48.196137  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:48.196146  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:48.196155  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:58.203822  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:40:58.203853  369869 system_pods.go:61] "coredns-5dd5756b68-xj4sh" [e2741c05-81b2-4de6-8329-f88912d48160] Running
	I0229 02:40:58.203859  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [88b0e865-c53e-4829-a56a-2a3b6e405df4] Running
	I0229 02:40:58.203866  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [445fa1c9-589b-437d-92ca-0d15ee8228ae] Running
	I0229 02:40:58.203872  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [e3f60cdb-6214-4987-b692-a4921ece3895] Running
	I0229 02:40:58.203877  369869 system_pods.go:61] "kube-proxy-gr44w" [a74b553f-683a-4e1b-ac48-b4553d00b306] Running
	I0229 02:40:58.203881  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [4c1afe85-10be-45e5-8b99-6bd3cf12a828] Running
	I0229 02:40:58.203888  369869 system_pods.go:61] "metrics-server-57f55c9bc5-fpwzl" [5215d27e-4bf2-4331-89f2-24096dc96b90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:40:58.203893  369869 system_pods.go:61] "storage-provisioner" [d7b70f8e-1689-4526-a39f-eb8005cbecd2] Running
	I0229 02:40:58.203902  369869 system_pods.go:74] duration metric: took 11.49086169s to wait for pod list to return data ...
	I0229 02:40:58.203913  369869 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:40:58.207120  369869 default_sa.go:45] found service account: "default"
	I0229 02:40:58.207145  369869 default_sa.go:55] duration metric: took 3.22533ms for default service account to be created ...
	I0229 02:40:58.207154  369869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:40:58.213026  369869 system_pods.go:86] 8 kube-system pods found
	I0229 02:40:58.213056  369869 system_pods.go:89] "coredns-5dd5756b68-xj4sh" [e2741c05-81b2-4de6-8329-f88912d48160] Running
	I0229 02:40:58.213065  369869 system_pods.go:89] "etcd-default-k8s-diff-port-071485" [88b0e865-c53e-4829-a56a-2a3b6e405df4] Running
	I0229 02:40:58.213073  369869 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071485" [445fa1c9-589b-437d-92ca-0d15ee8228ae] Running
	I0229 02:40:58.213081  369869 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071485" [e3f60cdb-6214-4987-b692-a4921ece3895] Running
	I0229 02:40:58.213088  369869 system_pods.go:89] "kube-proxy-gr44w" [a74b553f-683a-4e1b-ac48-b4553d00b306] Running
	I0229 02:40:58.213094  369869 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071485" [4c1afe85-10be-45e5-8b99-6bd3cf12a828] Running
	I0229 02:40:58.213107  369869 system_pods.go:89] "metrics-server-57f55c9bc5-fpwzl" [5215d27e-4bf2-4331-89f2-24096dc96b90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:40:58.213117  369869 system_pods.go:89] "storage-provisioner" [d7b70f8e-1689-4526-a39f-eb8005cbecd2] Running
	I0229 02:40:58.213130  369869 system_pods.go:126] duration metric: took 5.970128ms to wait for k8s-apps to be running ...
	I0229 02:40:58.213142  369869 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:40:58.213204  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:40:58.230150  369869 system_svc.go:56] duration metric: took 16.998299ms WaitForService to wait for kubelet.
	I0229 02:40:58.230178  369869 kubeadm.go:581] duration metric: took 4m38.815578079s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:40:58.230245  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:40:58.233660  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:40:58.233719  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:40:58.233737  369869 node_conditions.go:105] duration metric: took 3.486117ms to run NodePressure ...
	I0229 02:40:58.233756  369869 start.go:228] waiting for startup goroutines ...
	I0229 02:40:58.233766  369869 start.go:233] waiting for cluster config update ...
	I0229 02:40:58.233777  369869 start.go:242] writing updated cluster config ...
	I0229 02:40:58.234079  369869 ssh_runner.go:195] Run: rm -f paused
	I0229 02:40:58.285415  369869 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:40:58.287433  369869 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-071485" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.843591434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174919843564189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3edd295-50ae-460a-a9d9-da80415b505b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.844108076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbd415f1-1436-4d24-852d-4391a762c221 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.844249732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbd415f1-1436-4d24-852d-4391a762c221 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.844287809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bbd415f1-1436-4d24-852d-4391a762c221 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.884148728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1eba37e0-1bdf-433e-8451-ce62dccae9d7 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.884312351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1eba37e0-1bdf-433e-8451-ce62dccae9d7 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.885362249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb00a454-590a-4aec-89eb-d5d5c167b61c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.885771321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174919885746815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb00a454-590a-4aec-89eb-d5d5c167b61c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.886538687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=827a791e-0403-4cd2-b7b6-07ff1198e6ff name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.886616182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=827a791e-0403-4cd2-b7b6-07ff1198e6ff name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.886650441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=827a791e-0403-4cd2-b7b6-07ff1198e6ff name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.944041361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e191466-31da-4cb3-871d-e83400aff515 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.944142762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e191466-31da-4cb3-871d-e83400aff515 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.945601211Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c5bef55-8f27-4611-95e2-44f40ee457e4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.946220937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174919946145778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c5bef55-8f27-4611-95e2-44f40ee457e4 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.946931693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ecc05e3-32dd-4bef-b614-bbf26a0e14cc name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.947032204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ecc05e3-32dd-4bef-b614-bbf26a0e14cc name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.947090313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8ecc05e3-32dd-4bef-b614-bbf26a0e14cc name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.994583464Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a0f565c-2073-4f12-adbe-f8ca0381146b name=/runtime.v1.RuntimeService/Version
	Feb 29 02:48:39 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:39.994714021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a0f565c-2073-4f12-adbe-f8ca0381146b name=/runtime.v1.RuntimeService/Version
	Feb 29 02:48:40 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:40.002108795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=258b85e8-a440-4c0f-96dc-bd72d9d842c6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:48:40 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:40.002581703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709174920002557827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=258b85e8-a440-4c0f-96dc-bd72d9d842c6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:48:40 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:40.003389123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8380b70-907e-429a-9251-31e00830b81d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:40 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:40.003469117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8380b70-907e-429a-9251-31e00830b81d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:48:40 old-k8s-version-275488 crio[644]: time="2024-02-29 02:48:40.003509243Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d8380b70-907e-429a-9251-31e00830b81d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 02:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052077] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045395] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.718888] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb29 02:31] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.696519] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.748716] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.071940] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.086978] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.246454] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.137859] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.350900] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[ +17.818498] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.668154] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[Feb29 02:35] systemd-fstab-generator[8056]: Ignoring "noauto" option for root device
	[  +0.077012] kauditd_printk_skb: 15 callbacks suppressed
	[Feb29 02:37] systemd-fstab-generator[9745]: Ignoring "noauto" option for root device
	[  +0.066300] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:48:40 up 17 min,  0 users,  load average: 0.00, 0.05, 0.10
	Linux old-k8s-version-275488 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 02:48:38 old-k8s-version-275488 kubelet[19058]: F0229 02:48:38.463120   19058 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:48:38 old-k8s-version-275488 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:48:38 old-k8s-version-275488 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:48:39 old-k8s-version-275488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 881.
	Feb 29 02:48:39 old-k8s-version-275488 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:48:39 old-k8s-version-275488 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19078]: I0229 02:48:39.212711   19078 server.go:410] Version: v1.16.0
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19078]: I0229 02:48:39.212964   19078 plugins.go:100] No cloud provider specified.
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19078]: I0229 02:48:39.212976   19078 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19078]: I0229 02:48:39.216353   19078 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19078]: W0229 02:48:39.217476   19078 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19078]: F0229 02:48:39.217561   19078 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:48:39 old-k8s-version-275488 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:48:39 old-k8s-version-275488 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:48:39 old-k8s-version-275488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 882.
	Feb 29 02:48:39 old-k8s-version-275488 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:48:39 old-k8s-version-275488 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19118]: I0229 02:48:39.983977   19118 server.go:410] Version: v1.16.0
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19118]: I0229 02:48:39.984357   19118 plugins.go:100] No cloud provider specified.
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19118]: I0229 02:48:39.984369   19118 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19118]: I0229 02:48:39.986683   19118 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19118]: W0229 02:48:39.987626   19118 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:48:39 old-k8s-version-275488 kubelet[19118]: F0229 02:48:39.987666   19118 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:48:39 old-k8s-version-275488 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:48:39 old-k8s-version-275488 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275488 -n old-k8s-version-275488
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 2 (260.378769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-275488" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.68s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-02-29 02:49:58.900726393 +0000 UTC m=+5956.593873324
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-071485 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-071485 logs -n 25: (2.123405282s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-117441 sudo cat                              | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo find                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo crio                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-117441                                       | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-542968 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | disable-driver-mounts-542968                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:23 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-915633            | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247751             | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071485  | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275488        | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-915633                 | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247751                  | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071485       | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:40 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275488             | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:26:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
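
	The lines below follow the klog/glog prefix documented in the header above. As a minimal, illustrative sketch (not part of minikube or this test run), the Go snippet here shows one way such a line could be split into its fields; the regular expression and field names are assumptions derived only from the format string above.

	    // klogparse.go: illustrative only; field names are assumptions, not minikube code.
	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    // One capture group per field of the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg prefix.
	    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

	    func main() {
	    	line := "I0229 02:26:36.132854  370051 out.go:291] Setting OutFile to fd 1 ..."
	    	if m := klogLine.FindStringSubmatch(line); m != nil {
	    		fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s msg=%q\n",
	    			m[1], m[2], m[3], m[4], m[5], m[6])
	    	}
	    }
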
	I0229 02:26:36.132854  370051 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:26:36.133389  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133407  370051 out.go:304] Setting ErrFile to fd 2...
	I0229 02:26:36.133414  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133912  370051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:26:36.134959  370051 out.go:298] Setting JSON to false
	I0229 02:26:36.135907  370051 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7739,"bootTime":1709165857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:26:36.135982  370051 start.go:139] virtualization: kvm guest
	I0229 02:26:36.137916  370051 out.go:177] * [old-k8s-version-275488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:26:36.139510  370051 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:26:36.139543  370051 notify.go:220] Checking for updates...
	I0229 02:26:36.141206  370051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:26:36.142776  370051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:26:36.143982  370051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:26:36.145097  370051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:26:36.146170  370051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:26:36.147751  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:26:36.148198  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.148298  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.163969  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0229 02:26:36.164373  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.164977  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.165003  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.165394  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.165584  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.167312  370051 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 02:26:36.168337  370051 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:26:36.168641  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.168683  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.184089  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0229 02:26:36.184605  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.185181  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.185210  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.185551  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.185723  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.222261  370051 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:26:36.223363  370051 start.go:299] selected driver: kvm2
	I0229 02:26:36.223374  370051 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.223487  370051 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:26:36.224130  370051 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.224195  370051 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:26:36.239302  370051 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:26:36.239664  370051 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:26:36.239741  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:26:36.239755  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:26:36.239765  370051 start_flags.go:323] config:
	{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.239908  370051 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.241466  370051 out.go:177] * Starting control plane node old-k8s-version-275488 in cluster old-k8s-version-275488
	I0229 02:26:35.666509  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:38.738602  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:36.242536  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:26:36.242564  370051 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 02:26:36.242573  370051 cache.go:56] Caching tarball of preloaded images
	I0229 02:26:36.242641  370051 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:26:36.242651  370051 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 02:26:36.242742  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:26:36.242905  370051 start.go:365] acquiring machines lock for old-k8s-version-275488: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
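
	profile.go:148 above persists the profile as plain JSON under the profile directory. Assuming the JSON keys mirror the Go field names visible in the config dump earlier (an assumption, not verified against minikube's schema), a minimal sketch that reads a couple of those fields back might look like this:

	    // readprofile.go: a hedged sketch; struct fields are assumptions based on the dump above.
	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    	"os"
	    )

	    // Only the fields needed here; names mirror the struct dump above.
	    type profileConfig struct {
	    	Name             string
	    	Driver           string
	    	KubernetesConfig struct {
	    		KubernetesVersion string
	    		ContainerRuntime  string
	    	}
	    }

	    func main() {
	    	b, err := os.ReadFile(os.Args[1]) // e.g. .../profiles/old-k8s-version-275488/config.json
	    	if err != nil {
	    		panic(err)
	    	}
	    	var cfg profileConfig
	    	if err := json.Unmarshal(b, &cfg); err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("%s: %s on %s (%s)\n", cfg.Name,
	    		cfg.KubernetesConfig.KubernetesVersion, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime)
	    }
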
	I0229 02:26:44.818494  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:47.890482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:53.970508  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:57.042448  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:03.122506  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:06.194415  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:12.274520  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:15.346558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:21.426515  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:24.498557  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:30.578502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:33.650482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:39.730548  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:42.802507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:48.882487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:51.954507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:58.034498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:01.106530  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:07.186513  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:10.258485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:16.338519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:19.410521  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:25.490436  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:28.562555  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:34.642534  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:37.714514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:43.794519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:46.866487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:52.946514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:56.018488  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:02.098512  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:05.170472  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:11.250485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:14.322454  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:20.402450  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:23.474533  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:29.554541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:32.626489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:38.706558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:41.778502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:47.858493  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:50.930489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:57.010541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:00.082537  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:06.162498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:09.234502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:12.238620  369591 start.go:369] acquired machines lock for "no-preload-247751" in 4m33.303501223s
	I0229 02:30:12.238705  369591 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:12.238716  369591 fix.go:54] fixHost starting: 
	I0229 02:30:12.239171  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:12.239240  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:12.254984  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0229 02:30:12.255490  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:12.255991  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:30:12.256012  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:12.256463  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:12.256668  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:12.256840  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:30:12.258341  369591 fix.go:102] recreateIfNeeded on no-preload-247751: state=Stopped err=<nil>
	I0229 02:30:12.258371  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	W0229 02:30:12.258522  369591 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:12.260176  369591 out.go:177] * Restarting existing kvm2 VM for "no-preload-247751" ...
	I0229 02:30:12.261521  369591 main.go:141] libmachine: (no-preload-247751) Calling .Start
	I0229 02:30:12.261678  369591 main.go:141] libmachine: (no-preload-247751) Ensuring networks are active...
	I0229 02:30:12.262375  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network default is active
	I0229 02:30:12.262642  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network mk-no-preload-247751 is active
	I0229 02:30:12.262962  369591 main.go:141] libmachine: (no-preload-247751) Getting domain xml...
	I0229 02:30:12.263526  369591 main.go:141] libmachine: (no-preload-247751) Creating domain...
	I0229 02:30:13.474816  369591 main.go:141] libmachine: (no-preload-247751) Waiting to get IP...
	I0229 02:30:13.475810  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.476251  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.476305  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.476230  370599 retry.go:31] will retry after 302.404435ms: waiting for machine to come up
	I0229 02:30:13.780776  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.781237  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.781265  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.781193  370599 retry.go:31] will retry after 364.673363ms: waiting for machine to come up
	I0229 02:30:12.236310  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:12.236352  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:30:12.238426  369508 machine.go:91] provisioned docker machine in 4m37.406828317s
	I0229 02:30:12.238513  369508 fix.go:56] fixHost completed within 4m37.429140371s
	I0229 02:30:12.238526  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 4m37.429164063s
	W0229 02:30:12.238553  369508 start.go:694] error starting host: provision: host is not running
	W0229 02:30:12.238763  369508 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 02:30:12.238784  369508 start.go:709] Will try again in 5 seconds ...
	I0229 02:30:14.148040  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.148530  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.148561  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.148471  370599 retry.go:31] will retry after 430.606986ms: waiting for machine to come up
	I0229 02:30:14.581180  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.581649  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.581679  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.581598  370599 retry.go:31] will retry after 557.726488ms: waiting for machine to come up
	I0229 02:30:15.141289  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.141736  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.141767  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.141675  370599 retry.go:31] will retry after 611.257074ms: waiting for machine to come up
	I0229 02:30:15.754464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.754802  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.754831  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.754752  370599 retry.go:31] will retry after 905.484801ms: waiting for machine to come up
	I0229 02:30:16.661691  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:16.662072  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:16.662099  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:16.662020  370599 retry.go:31] will retry after 1.007584217s: waiting for machine to come up
	I0229 02:30:17.671565  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:17.672118  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:17.672159  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:17.672048  370599 retry.go:31] will retry after 933.310317ms: waiting for machine to come up
	I0229 02:30:18.607108  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:18.607473  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:18.607496  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:18.607426  370599 retry.go:31] will retry after 1.135856775s: waiting for machine to come up
	I0229 02:30:17.239210  369508 start.go:365] acquiring machines lock for embed-certs-915633: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:30:19.744656  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:19.745017  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:19.745047  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:19.744969  370599 retry.go:31] will retry after 2.184552748s: waiting for machine to come up
	I0229 02:30:21.932313  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:21.932764  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:21.932794  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:21.932711  370599 retry.go:31] will retry after 2.256573009s: waiting for machine to come up
	I0229 02:30:24.191551  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:24.191987  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:24.192016  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:24.191948  370599 retry.go:31] will retry after 3.0850751s: waiting for machine to come up
	I0229 02:30:27.278526  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:27.278941  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:27.278977  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:27.278914  370599 retry.go:31] will retry after 3.196492358s: waiting for machine to come up
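
	The retry.go lines above show the provisioner polling for the VM's IP with a delay that grows between attempts, with jitter, from roughly 300ms up to a few seconds. Below is a minimal sketch of that wait-and-retry pattern, assuming a hypothetical check function; it is not minikube's actual implementation, whose growth factor and jitter differ.

	    // retrywait.go: a sketch of the grow-and-jitter retry visible above; checkIP is hypothetical.
	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    func waitForMachine(check func() error, maxAttempts int) error {
	    	delay := 300 * time.Millisecond
	    	for i := 0; i < maxAttempts; i++ {
	    		if err := check(); err == nil {
	    			return nil
	    		}
	    		// Add up to 50% jitter, then grow the base delay, echoing the
	    		// 302ms ... 3.19s progression in the log above.
	    		d := delay + time.Duration(rand.Int63n(int64(delay/2)))
	    		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
	    		time.Sleep(d)
	    		delay = delay * 3 / 2
	    	}
	    	return errors.New("timed out waiting for machine")
	    }

	    func main() {
	    	checkIP := func() error { return errors.New("unable to find current IP address") }
	    	_ = waitForMachine(checkIP, 5)
	    }
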
	I0229 02:30:31.627482  369869 start.go:369] acquired machines lock for "default-k8s-diff-port-071485" in 4m6.129938439s
	I0229 02:30:31.627553  369869 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:31.627561  369869 fix.go:54] fixHost starting: 
	I0229 02:30:31.628005  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:31.628052  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:31.645217  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0229 02:30:31.645607  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:31.646146  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:30:31.646179  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:31.646526  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:31.646754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:31.646941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:30:31.648372  369869 fix.go:102] recreateIfNeeded on default-k8s-diff-port-071485: state=Stopped err=<nil>
	I0229 02:30:31.648410  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	W0229 02:30:31.648603  369869 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:31.650778  369869 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071485" ...
	I0229 02:30:30.479186  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479664  369591 main.go:141] libmachine: (no-preload-247751) Found IP for machine: 192.168.72.114
	I0229 02:30:30.479694  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has current primary IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479705  369591 main.go:141] libmachine: (no-preload-247751) Reserving static IP address...
	I0229 02:30:30.480161  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.480199  369591 main.go:141] libmachine: (no-preload-247751) DBG | skip adding static IP to network mk-no-preload-247751 - found existing host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"}
	I0229 02:30:30.480213  369591 main.go:141] libmachine: (no-preload-247751) Reserved static IP address: 192.168.72.114
	I0229 02:30:30.480233  369591 main.go:141] libmachine: (no-preload-247751) Waiting for SSH to be available...
	I0229 02:30:30.480246  369591 main.go:141] libmachine: (no-preload-247751) DBG | Getting to WaitForSSH function...
	I0229 02:30:30.482557  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.482907  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.482935  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.483110  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH client type: external
	I0229 02:30:30.483136  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa (-rw-------)
	I0229 02:30:30.483166  369591 main.go:141] libmachine: (no-preload-247751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:30.483180  369591 main.go:141] libmachine: (no-preload-247751) DBG | About to run SSH command:
	I0229 02:30:30.483197  369591 main.go:141] libmachine: (no-preload-247751) DBG | exit 0
	I0229 02:30:30.610329  369591 main.go:141] libmachine: (no-preload-247751) DBG | SSH cmd err, output: <nil>: 
	I0229 02:30:30.610691  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetConfigRaw
	I0229 02:30:30.611393  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.614007  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614393  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.614426  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614689  369591 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/config.json ...
	I0229 02:30:30.614872  369591 machine.go:88] provisioning docker machine ...
	I0229 02:30:30.614892  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:30.615096  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615250  369591 buildroot.go:166] provisioning hostname "no-preload-247751"
	I0229 02:30:30.615272  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615444  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.617525  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617800  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.617835  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617898  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.618095  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618289  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618424  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.618564  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.618790  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.618807  369591 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-247751 && echo "no-preload-247751" | sudo tee /etc/hostname
	I0229 02:30:30.740902  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-247751
	
	I0229 02:30:30.740952  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.743879  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744353  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.744396  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744584  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.744843  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745014  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745197  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.745351  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.745525  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.745543  369591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-247751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247751/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-247751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:30.867175  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:30.867209  369591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:30.867229  369591 buildroot.go:174] setting up certificates
	I0229 02:30:30.867240  369591 provision.go:83] configureAuth start
	I0229 02:30:30.867248  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.867521  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.870143  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870443  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.870464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870678  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.872992  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873434  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.873463  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873643  369591 provision.go:138] copyHostCerts
	I0229 02:30:30.873713  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:30.873740  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:30.873830  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:30.873937  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:30.873948  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:30.873992  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:30.874070  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:30.874080  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:30.874110  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:30.874240  369591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.no-preload-247751 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube no-preload-247751]
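
	provision.go:112 above issues a server certificate whose SANs cover the VM IP, localhost, 127.0.0.1 and the machine hostnames, signed by the profile CA and valid for the CertExpiration of 26280h0m0s seen in the config. A minimal self-signed sketch in Go of producing a certificate with that SAN set follows; minikube signs with its CA key rather than self-signing, the names are copied from the log line above, and the code itself is illustrative, with error handling elided.

	    // sancert.go: a self-signed sketch, not minikube's provisioner.
	    package main

	    import (
	    	"crypto/ecdsa"
	    	"crypto/elliptic"
	    	"crypto/rand"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-247751"}},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		// SAN set copied from the san=[...] list in the log line above.
	    		DNSNames:    []string{"localhost", "minikube", "no-preload-247751"},
	    		IPAddresses: []net.IP{net.ParseIP("192.168.72.114"), net.ParseIP("127.0.0.1")},
	    	}
	    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
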
	I0229 02:30:30.921711  369591 provision.go:172] copyRemoteCerts
	I0229 02:30:30.921769  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:30.921793  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.924128  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924436  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.924474  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924628  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.924815  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.924975  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.925073  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.009229  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:31.035962  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:30:31.062947  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:31.089920  369591 provision.go:86] duration metric: configureAuth took 222.667724ms
	I0229 02:30:31.089947  369591 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:31.090145  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:30:31.090256  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.092831  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093148  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.093192  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093338  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.093511  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093699  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093864  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.094032  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.094196  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.094211  369591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:31.381995  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:31.382023  369591 machine.go:91] provisioned docker machine in 767.136363ms
	I0229 02:30:31.382036  369591 start.go:300] post-start starting for "no-preload-247751" (driver="kvm2")
	I0229 02:30:31.382049  369591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:31.382066  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.382560  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:31.382596  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.385219  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385574  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.385602  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385742  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.385955  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.386091  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.386254  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.469621  369591 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:31.474615  369591 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:31.474640  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:31.474702  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:31.474772  369591 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:31.474867  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:31.484964  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:31.512459  369591 start.go:303] post-start completed in 130.406384ms
	I0229 02:30:31.512519  369591 fix.go:56] fixHost completed within 19.27376704s
	I0229 02:30:31.512569  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.515169  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515568  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.515596  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515717  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.515944  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516108  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516260  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.516417  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.516592  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.516605  369591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:30:31.627335  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173831.594794890
	
	I0229 02:30:31.627357  369591 fix.go:206] guest clock: 1709173831.594794890
	I0229 02:30:31.627366  369591 fix.go:219] Guest: 2024-02-29 02:30:31.59479489 +0000 UTC Remote: 2024-02-29 02:30:31.512545974 +0000 UTC m=+292.733991044 (delta=82.248916ms)
	I0229 02:30:31.627395  369591 fix.go:190] guest clock delta is within tolerance: 82.248916ms
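
The fix-host step above reads the guest clock with `date +%s.%N`, parses it, and accepts the run when the host/guest delta is small. A sketch of that parse-and-compare using the values from the log (the 2s tolerance is an assumption, not minikube's exact threshold):

    // sketch: parse `date +%s.%N` output and check host/guest clock drift.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func guestTime(out string) (time.Time, error) {
        // date +%s.%N prints seconds and a 9-digit nanosecond field.
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
        guest, err := guestTime("1709173831.594794890") // value from the log above
        if err != nil {
            panic(err)
        }
        remote := time.Date(2024, 2, 29, 2, 30, 31, 512545974, time.UTC)
        delta := guest.Sub(remote)
        const tolerance = 2 * time.Second // assumed threshold, not minikube's exact value
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < tolerance)
    }
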
	I0229 02:30:31.627403  369591 start.go:83] releasing machines lock for "no-preload-247751", held for 19.38873796s
	I0229 02:30:31.627429  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.627713  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:31.630486  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.630930  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.630959  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.631131  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631640  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631830  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631920  369591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:31.631983  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.632122  369591 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:31.632160  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.634658  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.634874  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635050  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635079  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635348  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635354  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635379  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635478  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635566  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635633  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635758  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635768  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635934  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.635940  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.719735  369591 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:31.739831  369591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:31.891138  369591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:31.899497  369591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:31.899569  369591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:31.921755  369591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:30:31.921785  369591 start.go:475] detecting cgroup driver to use...
	I0229 02:30:31.921896  369591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:31.938157  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:31.952761  369591 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:31.952834  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:31.966785  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:31.980931  369591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:32.091879  369591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:32.261190  369591 docker.go:233] disabling docker service ...
	I0229 02:30:32.261272  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:32.278862  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:32.295382  369591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:32.433426  369591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:32.557975  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:30:32.573791  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:32.595797  369591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:32.595848  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.608978  369591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:32.609042  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.621681  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.634251  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.647107  369591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:30:32.660478  369591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:32.672596  369591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:32.672662  369591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:32.688480  369591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:30:32.700769  369591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:32.823703  369591 ssh_runner.go:195] Run: sudo systemctl restart crio
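
The reconfiguration above is three in-place edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup driver, conmon cgroup) followed by a daemon restart. The same rewrites expressed as Go regexp replacements, as a sketch of what the sed invocations do (not how minikube implements them):

    // sketch: the three CRI-O config rewrites above, as Go regexp replacements.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        cfg := string(data)
        // pause_image -> registry.k8s.io/pause:3.9
        cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.9"`)
        // cgroup_manager -> cgroupfs
        cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(cfg, `cgroup_manager = "cgroupfs"`)
        // drop any existing conmon_cgroup line, then re-add it after cgroup_manager
        cfg = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(cfg, "")
        cfg = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(cfg, "$1\nconmon_cgroup = \"pod\"")
        if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
            panic(err)
        }
    }
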
	I0229 02:30:33.004444  369591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:33.004531  369591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:33.010801  369591 start.go:543] Will wait 60s for crictl version
	I0229 02:30:33.010862  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.015224  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:33.064627  369591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:33.064721  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.108265  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.142639  369591 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 02:30:33.144169  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:33.147250  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147609  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:33.147644  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147836  369591 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:33.153138  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
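
The shell pipeline above makes the host.minikube.internal mapping idempotent: filter out any stale line, then append the current one. The same logic in Go, as an illustrative sketch (ensureHostsEntry is a hypothetical helper, not minikube's):

    // sketch: strip any stale tab-delimited mapping, then append the current one.
    package main

    import (
        "os"
        "strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := lines[:0]
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+host) {
                continue // same filter as the grep -v in the command above
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
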
	I0229 02:30:33.169427  369591 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:30:33.169481  369591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:33.214079  369591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 02:30:33.214113  369591 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
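
The preload probe above lists what the runtime already holds via `crictl images --output json` and falls back to loading each image from cache when a required tag is absent. A sketch of that check (the struct covers only the field used; field casing follows crictl's JSON output, which is an assumption worth verifying against your crictl version):

    // sketch: ask crictl for its image list and look for a required tag.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // only the field we need from `crictl images --output json`
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.29.0-rc.2")
        fmt.Println(ok, err)
    }
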
	I0229 02:30:33.214193  369591 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.214216  369591 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.214252  369591 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.214276  369591 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.214335  369591 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.214323  369591 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.214354  369591 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 02:30:33.214241  369591 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.215880  369591 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215928  369591 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.215947  369591 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.216082  369591 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.216136  369591 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.216252  369591 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.348095  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 02:30:33.434211  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.496911  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.499249  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.503235  369591 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0229 02:30:33.503274  369591 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.503307  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.507506  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.548265  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.551287  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.589427  369591 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0229 02:30:33.589474  369591 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.589523  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.590660  369591 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0229 02:30:33.590688  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.590708  369591 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.590763  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.636886  369591 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0229 02:30:33.636934  369591 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.637001  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.664221  369591 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0229 02:30:33.664266  369591 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.664316  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.691890  369591 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0229 02:30:33.691945  369591 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.691978  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.691993  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.692003  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692096  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.692107  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.692104  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692165  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.793616  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793708  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.793723  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793772  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793839  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 02:30:33.793853  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793856  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 02:30:33.793884  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0229 02:30:33.793902  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.793910  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:33.793914  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:33.793936  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:31.652037  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Start
	I0229 02:30:31.652202  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring networks are active...
	I0229 02:30:31.652984  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network default is active
	I0229 02:30:31.653457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network mk-default-k8s-diff-port-071485 is active
	I0229 02:30:31.653909  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Getting domain xml...
	I0229 02:30:31.654724  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Creating domain...
	I0229 02:30:32.911561  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting to get IP...
	I0229 02:30:32.912505  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.912932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.913032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:32.912928  370716 retry.go:31] will retry after 285.213813ms: waiting for machine to come up
	I0229 02:30:33.199327  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199733  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199764  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.199678  370716 retry.go:31] will retry after 334.890426ms: waiting for machine to come up
	I0229 02:30:33.536492  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.536976  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.537006  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.536924  370716 retry.go:31] will retry after 344.946846ms: waiting for machine to come up
	I0229 02:30:33.883432  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883911  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.883858  370716 retry.go:31] will retry after 516.135135ms: waiting for machine to come up
	I0229 02:30:34.401167  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401592  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401621  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.401543  370716 retry.go:31] will retry after 538.013174ms: waiting for machine to come up
	I0229 02:30:34.941529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942080  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942116  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.942039  370716 retry.go:31] will retry after 883.013858ms: waiting for machine to come up
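
The `retry.go:31` lines above poll for the domain's DHCP lease with a growing, randomized delay between attempts. A minimal sketch of that wait loop; the base interval, jitter strategy, and growth factor are assumptions inferred from the cadence in the log, not minikube's constants:

    // sketch: retry with a growing, jittered delay until fn succeeds or time runs out.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retry(fn func() error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        base := 250 * time.Millisecond
        for attempt := 1; ; attempt++ {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            sleep := base + time.Duration(rand.Int63n(int64(base))) // add jitter
            fmt.Printf("retry %d: will retry after %v: %v\n", attempt, sleep, err)
            time.Sleep(sleep)
            base = base * 3 / 2 // grow the interval, roughly matching the cadence above
        }
    }

    func main() {
        calls := 0
        err := retry(func() error {
            calls++
            if calls < 4 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        }, 30*time.Second)
        fmt.Println("done:", err)
    }
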
	I0229 02:30:33.850786  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0229 02:30:33.850868  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 02:30:33.850977  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:34.154343  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.987957  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (3.194013383s)
	I0229 02:30:36.987999  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0229 02:30:36.988100  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.194139784s)
	I0229 02:30:36.988127  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0229 02:30:36.988148  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.194207246s)
	I0229 02:30:36.988178  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0229 02:30:36.988156  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988191  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.194323563s)
	I0229 02:30:36.988206  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988236  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988269  369591 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.833890629s)
	I0229 02:30:36.988240  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.13724749s)
	I0229 02:30:36.988310  369591 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 02:30:36.988331  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988343  369591 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.988375  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:36.993483  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:38.351556  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.363290185s)
	I0229 02:30:38.351599  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0229 02:30:38.351633  369591 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351632  369591 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.358113254s)
	I0229 02:30:38.351686  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 02:30:38.351705  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351782  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:35.827402  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827906  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:35.827872  370716 retry.go:31] will retry after 902.653821ms: waiting for machine to come up
	I0229 02:30:36.732470  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732925  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732957  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:36.732863  370716 retry.go:31] will retry after 1.322376383s: waiting for machine to come up
	I0229 02:30:38.057306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057874  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:38.057790  370716 retry.go:31] will retry after 1.16249498s: waiting for machine to come up
	I0229 02:30:39.221714  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222197  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:39.222156  370716 retry.go:31] will retry after 1.912383064s: waiting for machine to come up
	I0229 02:30:42.350149  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.998331984s)
	I0229 02:30:42.350198  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0229 02:30:42.350214  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.99848453s)
	I0229 02:30:42.350266  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0229 02:30:42.350305  369591 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:42.350357  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:41.135736  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136113  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136144  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:41.136058  370716 retry.go:31] will retry after 2.823296742s: waiting for machine to come up
	I0229 02:30:43.960885  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961677  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961703  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:43.961582  370716 retry.go:31] will retry after 3.266272258s: waiting for machine to come up
	I0229 02:30:44.528869  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.178478896s)
	I0229 02:30:44.528915  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0229 02:30:44.528947  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:44.529014  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:46.991074  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462030604s)
	I0229 02:30:46.991103  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0229 02:30:46.991129  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:46.991195  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
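
Each image above is transferred once into /var/lib/minikube/images (the copy is skipped when `stat` shows the tarball already there), then loaded serially with `podman load -i`. A condensed sketch of that load step, exec'ing locally where the log runs the command through ssh_runner:

    // sketch: skip the transfer when the tarball is already on disk, then load it.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func loadCached(tar string) error {
        if _, err := os.Stat(tar); err != nil {
            // in the real flow this is where the tarball would be copied from
            // the host-side cache before loading
            return fmt.Errorf("tarball not present, transfer needed: %w", err)
        }
        fmt.Println("copy: skipping", tar, "(exists)")
        out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := loadCached("/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2"); err != nil {
            fmt.Println(err)
        }
    }
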
	I0229 02:30:47.229005  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229478  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229511  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:47.229417  370716 retry.go:31] will retry after 3.429712893s: waiting for machine to come up
	I0229 02:30:51.887858  370051 start.go:369] acquired machines lock for "old-k8s-version-275488" in 4m15.644916266s
	I0229 02:30:51.887935  370051 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:51.887944  370051 fix.go:54] fixHost starting: 
	I0229 02:30:51.888374  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:51.888428  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:51.905851  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0229 02:30:51.906292  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:51.906778  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:30:51.906806  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:51.907250  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:51.907459  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:30:51.907631  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetState
	I0229 02:30:51.909061  370051 fix.go:102] recreateIfNeeded on old-k8s-version-275488: state=Stopped err=<nil>
	I0229 02:30:51.909093  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	W0229 02:30:51.909251  370051 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:51.911318  370051 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275488" ...
	I0229 02:30:50.662939  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663341  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Found IP for machine: 192.168.61.233
	I0229 02:30:50.663366  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserving static IP address...
	I0229 02:30:50.663404  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has current primary IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663745  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.663781  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserved static IP address: 192.168.61.233
	I0229 02:30:50.663804  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | skip adding static IP to network mk-default-k8s-diff-port-071485 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"}
	I0229 02:30:50.663819  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for SSH to be available...
	I0229 02:30:50.663830  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Getting to WaitForSSH function...
	I0229 02:30:50.665924  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666270  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.666306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666411  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH client type: external
	I0229 02:30:50.666435  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa (-rw-------)
	I0229 02:30:50.666464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:50.666477  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | About to run SSH command:
	I0229 02:30:50.666489  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | exit 0
	I0229 02:30:50.794598  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | SSH cmd err, output: <nil>: 
	I0229 02:30:50.795011  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetConfigRaw
	I0229 02:30:50.795753  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:50.798443  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.798796  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.798822  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.799151  369869 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/config.json ...
	I0229 02:30:50.799410  369869 machine.go:88] provisioning docker machine ...
	I0229 02:30:50.799440  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:50.799684  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.799937  369869 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071485"
	I0229 02:30:50.799963  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.800129  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.802457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802786  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.802813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802923  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.803087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803281  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803393  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.803527  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.803744  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.803757  369869 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071485 && echo "default-k8s-diff-port-071485" | sudo tee /etc/hostname
	I0229 02:30:50.930812  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071485
	
	I0229 02:30:50.930849  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.933650  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934017  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.934057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934217  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.934458  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934651  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.934964  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.935141  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.935159  369869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071485/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:51.057233  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:51.057266  369869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:51.057307  369869 buildroot.go:174] setting up certificates
	I0229 02:30:51.057321  369869 provision.go:83] configureAuth start
	I0229 02:30:51.057335  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:51.057615  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.060233  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060563  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.060595  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060707  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.062583  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.062889  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.062938  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.063065  369869 provision.go:138] copyHostCerts
	I0229 02:30:51.063121  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:51.063140  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:51.063193  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:51.063290  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:51.063304  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:51.063332  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:51.063396  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:51.063403  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:51.063420  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:51.063482  369869 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071485 san=[192.168.61.233 192.168.61.233 localhost 127.0.0.1 minikube default-k8s-diff-port-071485]
	I0229 02:30:51.180356  369869 provision.go:172] copyRemoteCerts
	I0229 02:30:51.180417  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:51.180446  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.182981  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183262  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.183295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183465  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.183656  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.183814  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.183958  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.270548  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:51.297136  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 02:30:51.323133  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:51.349241  369869 provision.go:86] duration metric: configureAuth took 291.905825ms
	I0229 02:30:51.349269  369869 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:51.349453  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:30:51.349529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.352119  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352473  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.352503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352658  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.352839  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353009  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353122  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.353304  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.353480  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.353495  369869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:51.639987  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:51.640022  369869 machine.go:91] provisioned docker machine in 840.591751ms
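
[Editor's note] The "About to run SSH command" block above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube over SSH and restarts crio. A minimal sketch of running that remote command with golang.org/x/crypto/ssh follows; the host, key path, and user are taken from the log, but this is an illustrative client, not minikube's SSH runner.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no known_hosts check
	}
	client, err := ssh.Dial("tcp", "192.168.61.233:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The same command the provisioner logs before running it.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("err=%v output=%s\n", err, out)
}
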
	I0229 02:30:51.640041  369869 start.go:300] post-start starting for "default-k8s-diff-port-071485" (driver="kvm2")
	I0229 02:30:51.640057  369869 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:51.640087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.640450  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:51.640486  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.643118  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643427  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.643464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643661  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.643871  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.644025  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.644164  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.730150  369869 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:51.735109  369869 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:51.735135  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:51.735207  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:51.735298  369869 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:51.735416  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:51.745416  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:51.771727  369869 start.go:303] post-start completed in 131.66845ms
	I0229 02:30:51.771756  369869 fix.go:56] fixHost completed within 20.144195498s
	I0229 02:30:51.771782  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.774300  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774582  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.774610  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774744  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.774972  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.775481  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.775648  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.775659  369869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:30:51.887656  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173851.865903243
	
	I0229 02:30:51.887683  369869 fix.go:206] guest clock: 1709173851.865903243
	I0229 02:30:51.887691  369869 fix.go:219] Guest: 2024-02-29 02:30:51.865903243 +0000 UTC Remote: 2024-02-29 02:30:51.771760886 +0000 UTC m=+266.432013426 (delta=94.142357ms)
	I0229 02:30:51.887738  369869 fix.go:190] guest clock delta is within tolerance: 94.142357ms
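
[Editor's note] The clock check above runs `date +%s.%N` on the guest and compares the result against the host clock, accepting a small skew ("delta is within tolerance"). A small Go sketch of that comparison; the sample timestamp is from the log and the 2s tolerance is an assumed value.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1709173851.865903243" // output of `date +%s.%N` on the guest

	// Parse seconds and nanoseconds separately to avoid float rounding.
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
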
	I0229 02:30:51.887744  369869 start.go:83] releasing machines lock for "default-k8s-diff-port-071485", held for 20.260217484s
	I0229 02:30:51.887771  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.888047  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.890930  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891264  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.891294  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891491  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892002  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892209  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892299  369869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:51.892370  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.892472  369869 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:51.892503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.895178  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895415  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895591  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895626  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895769  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895800  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895820  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.895966  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.896055  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896141  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896212  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896367  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.896447  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.976085  369869 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:52.001946  369869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:52.156753  369869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:52.164196  369869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:52.164302  369869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:52.189176  369869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
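
[Editor's note] The find/mv command above disables any conflicting bridge or podman CNI configs by renaming them with a .mk_disabled suffix, which is why the log then reports "disabled [/etc/cni/net.d/87-podman-bridge.conflist]". A minimal sketch of the same rename pass in Go, under the assumption that only the file name matters:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled on a previous run
		}
		// Bridge/podman configs conflict with the CNI minikube sets up itself.
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err == nil {
				fmt.Println("disabled", p)
			}
		}
	}
}
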
	I0229 02:30:52.189201  369869 start.go:475] detecting cgroup driver to use...
	I0229 02:30:52.189281  369869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:52.207647  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:52.223752  369869 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:52.223842  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:52.246026  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:52.262180  369869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:52.409077  369869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:52.583777  369869 docker.go:233] disabling docker service ...
	I0229 02:30:52.583850  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:52.601434  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:52.617382  369869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:52.757258  369869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:52.898036  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:30:52.915787  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:52.939344  369869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:52.939417  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.951659  369869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:52.951722  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.963072  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.974800  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
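
[Editor's note] The three sed invocations above force the pause image and the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, then drop and re-add conmon_cgroup as "pod". A Go sketch of the same line-rewriting, using regexp instead of sed (a behavioural equivalent, not minikube's code):

package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	cfg := string(data)

	// Drop any existing conmon_cgroup line, mirroring sed '/conmon_cgroup = .*/d'.
	cfg = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n?`).ReplaceAllString(cfg, "")
	// Force the pause image, as in the first sed above.
	cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Force cgroupfs and re-add conmon_cgroup right after it.
	cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(cfg, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(cfg), 0644); err != nil {
		panic(err)
	}
}
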
	I0229 02:30:52.986490  369869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:30:52.998630  369869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:53.009783  369869 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:53.009862  369869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:53.026356  369869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:30:53.038720  369869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:53.171220  369869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:30:53.326032  369869 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:53.326102  369869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:53.332369  369869 start.go:543] Will wait 60s for crictl version
	I0229 02:30:53.332431  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:30:53.336784  369869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:53.378780  369869 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:53.378902  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.411158  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.447038  369869 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:30:49.053324  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.062103665s)
	I0229 02:30:49.053353  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0229 02:30:49.053378  369591 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.053426  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.910791  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0229 02:30:49.910854  369591 cache_images.go:123] Successfully loaded all cached images
	I0229 02:30:49.910862  369591 cache_images.go:92] LoadImages completed in 16.696734078s
	I0229 02:30:49.910994  369591 ssh_runner.go:195] Run: crio config
	I0229 02:30:49.961413  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:30:49.961435  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:49.961456  369591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:49.961509  369591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-247751 NodeName:no-preload-247751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:49.961701  369591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-247751"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
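
[Editor's note] The kubeadm config above is rendered from the option struct logged at kubeadm.go:176. A minimal text/template sketch of that rendering, covering only a subset of the fields (the struct and template here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values taken from the kubeadm.go:176 log line above.
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.72.114",
		APIServerPort:     8443,
		NodeName:          "no-preload-247751",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.29.0-rc.2",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}
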
	
	I0229 02:30:49.961801  369591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-247751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:30:49.961866  369591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 02:30:49.973105  369591 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:49.973170  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:49.983178  369591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0229 02:30:50.001511  369591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 02:30:50.019574  369591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0229 02:30:50.037993  369591 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:50.042075  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
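
[Editor's note] The bash one-liner above updates /etc/hosts idempotently: grep -v strips any stale control-plane.minikube.internal line, the fresh mapping is appended, and the result is copied back via a temp file. The same logic as a Go sketch (a direct rewrite; the temp-file-plus-sudo-cp step is noted in a comment):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.72.114\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping, like grep -v $'\tcontrol-plane.minikube.internal$'.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// The shell version writes /tmp/h.$$ and copies it back with sudo;
	// a direct write like this needs to run as root.
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
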
	I0229 02:30:50.054761  369591 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751 for IP: 192.168.72.114
	I0229 02:30:50.054796  369591 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:50.054976  369591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:50.055031  369591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:50.055146  369591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/client.key
	I0229 02:30:50.055243  369591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key.9adeb8c5
	I0229 02:30:50.055310  369591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key
	I0229 02:30:50.055440  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:50.055481  369591 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:50.055502  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:50.055542  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:50.055577  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:50.055658  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:50.055728  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:50.056454  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:50.083764  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:50.110733  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:50.139180  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:50.167000  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:50.194044  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:50.220671  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:50.247561  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:50.274577  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:50.300997  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:50.327718  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:50.355463  369591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:50.374921  369591 ssh_runner.go:195] Run: openssl version
	I0229 02:30:50.381614  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:50.393546  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398532  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398594  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.404719  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:30:50.416507  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:50.428072  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433031  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433106  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.439174  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:50.450778  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:50.462238  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467219  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467269  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.473395  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:50.484817  369591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:50.489643  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:50.496274  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:50.502579  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:50.508665  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:50.514827  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:50.520958  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
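
[Editor's note] Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours (exit status 1 would trigger regeneration). The equivalent check in Go with crypto/x509; the path is one of those from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
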
	I0229 02:30:50.527032  369591 kubeadm.go:404] StartCluster: {Name:no-preload-247751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:50.527147  369591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:50.527194  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:50.565834  369591 cri.go:89] found id: ""
	I0229 02:30:50.565931  369591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:50.577305  369591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:50.577354  369591 kubeadm.go:636] restartCluster start
	I0229 02:30:50.577408  369591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:50.587881  369591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:50.588896  369591 kubeconfig.go:92] found "no-preload-247751" server: "https://192.168.72.114:8443"
	I0229 02:30:50.591223  369591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:50.601374  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:50.601434  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:50.613730  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.102422  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.102539  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.116483  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.601564  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.601657  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.615481  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.102039  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.102123  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.121300  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.601999  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.602093  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.618701  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.102291  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.102403  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.117898  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.602410  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.602496  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.618760  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.448437  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:53.451649  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.451998  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:53.452052  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.452302  369869 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:53.458709  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:53.477744  369869 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:30:53.477831  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:53.527511  369869 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:30:53.527593  369869 ssh_runner.go:195] Run: which lz4
	I0229 02:30:53.532370  369869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:30:53.537149  369869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:30:53.537179  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:30:51.912520  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .Start
	I0229 02:30:51.912688  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring networks are active...
	I0229 02:30:51.913511  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network default is active
	I0229 02:30:51.913929  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network mk-old-k8s-version-275488 is active
	I0229 02:30:51.914378  370051 main.go:141] libmachine: (old-k8s-version-275488) Getting domain xml...
	I0229 02:30:51.915191  370051 main.go:141] libmachine: (old-k8s-version-275488) Creating domain...
	I0229 02:30:53.179261  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting to get IP...
	I0229 02:30:53.180359  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.180800  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.180922  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.180789  370858 retry.go:31] will retry after 282.360524ms: waiting for machine to come up
	I0229 02:30:53.465135  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.465708  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.465742  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.465651  370858 retry.go:31] will retry after 341.876004ms: waiting for machine to come up
	I0229 02:30:53.809322  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.809734  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.809876  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.809797  370858 retry.go:31] will retry after 356.208548ms: waiting for machine to come up
	I0229 02:30:54.167329  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.167824  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.167852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.167760  370858 retry.go:31] will retry after 395.76503ms: waiting for machine to come up
	I0229 02:30:54.565496  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.565976  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.566004  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.565933  370858 retry.go:31] will retry after 617.898012ms: waiting for machine to come up
	I0229 02:30:55.185679  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:55.186193  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:55.186237  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:55.186144  370858 retry.go:31] will retry after 911.947678ms: waiting for machine to come up
	I0229 02:30:56.099334  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:56.099788  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:56.099815  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:56.099726  370858 retry.go:31] will retry after 1.132066509s: waiting for machine to come up
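
[Editor's note] The retry.go lines above wait for the VM's DHCP lease with growing, jittered delays (282ms, 341ms, 356ms, 395ms, 617ms, 911ms, 1.13s, ...). A sketch of that backoff loop; lookupIP is a hypothetical stand-in for the libvirt DHCP-lease query, and the growth factor and deadline are assumed values.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the domain's DHCP lease via libvirt.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address") // stub
}

func main() {
	deadline := time.Now().Add(3 * time.Minute)
	delay := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add jitter so parallel waiters don't poll in lockstep.
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay, roughly as the log shows
	}
	fmt.Println("timed out waiting for machine to come up")
}
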
	I0229 02:30:54.102304  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.102485  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.123193  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:54.601763  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.601890  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.621846  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.102417  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.102503  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.129010  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.601478  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.601532  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.620169  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.101701  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.101776  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.121369  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.601447  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.601550  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.617079  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.101509  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.101648  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.121691  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.601658  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.601754  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.620357  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.101829  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.101921  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.115818  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.602403  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.602509  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.621857  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
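
[Editor's note] The repeated "Checking apiserver status" lines above are restartCluster polling `sudo pgrep -xnf kube-apiserver.*minikube.*` on a roughly 500ms cadence until the apiserver process appears or the wait times out. A sketch of that poll loop; probe is a hypothetical stand-in for the pgrep-over-SSH call, and the 4-minute timeout is an assumed value.

package main

import (
	"errors"
	"fmt"
	"time"
)

// probe stands in for running `sudo pgrep -xnf kube-apiserver.*minikube.*`
// over SSH and parsing the PID from its output.
func probe() (int, error) {
	return 0, errors.New("unable to get apiserver pid") // stub
}

func main() {
	timeout := time.After(4 * time.Minute)
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-timeout:
			fmt.Println("gave up waiting for kube-apiserver")
			return
		case <-tick.C:
			if pid, err := probe(); err == nil {
				fmt.Println("apiserver running, pid", pid)
				return
			}
		}
	}
}
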
	I0229 02:30:55.599398  369869 crio.go:444] Took 2.067052 seconds to copy over tarball
	I0229 02:30:55.599501  369869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:30:58.543850  369869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944309258s)
	I0229 02:30:58.543884  369869 crio.go:451] Took 2.944447 seconds to extract the tarball
	I0229 02:30:58.543896  369869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:30:58.592492  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:58.751479  369869 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:30:58.751509  369869 cache_images.go:84] Images are preloaded, skipping loading
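
[Editor's note] The crio.go:492/496 lines decide whether the preload tarball is usable by listing images with `sudo crictl images --output json` and checking for the expected tags. A sketch of that check; the JSON shape used here ({"images":[{"repoTags":[...]}]}) is my understanding of crictl's output and should be treated as an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models the assumed shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.28.4"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded:", want)
				return
			}
		}
	}
	fmt.Println("not preloaded:", want) // triggers the tarball copy/extract path
}
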
	I0229 02:30:58.751576  369869 ssh_runner.go:195] Run: crio config
	I0229 02:30:58.813487  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:30:58.813515  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:58.813540  369869 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:58.813566  369869 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.233 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071485 NodeName:default-k8s-diff-port-071485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:58.813785  369869 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071485"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:30:58.813898  369869 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-071485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0229 02:30:58.813971  369869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:30:58.826199  369869 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:58.826324  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:58.837384  369869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 02:30:58.856023  369869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:30:58.876432  369869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
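
The "scp memory --> <path> (<n> bytes)" entries denote streaming an in-memory asset over the existing SSH connection to the given remote path, rather than copying a file from the local disk; the unit file and kubeadm.yaml above were rendered in memory and pushed directly.
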
	I0229 02:30:58.900684  369869 ssh_runner.go:195] Run: grep 192.168.61.233	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:58.905249  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
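
The /etc/hosts update above goes through a temp file plus sudo cp rather than a direct redirect because in "sudo cmd > /etc/hosts" the redirect is opened by the unprivileged shell, not by sudo; building the new contents under /tmp/h.$$ also avoids truncating /etc/hosts while the grep -v in the same pipeline is still reading it. (The $'\t...' is bash ANSI-C quoting for a literal tab in the match pattern.)
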
	I0229 02:30:58.920007  369869 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485 for IP: 192.168.61.233
	I0229 02:30:58.920046  369869 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:58.920249  369869 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:58.920319  369869 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:58.920432  369869 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/client.key
	I0229 02:30:58.995037  369869 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key.b3fc8ab0
	I0229 02:30:58.995173  369869 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key
	I0229 02:30:58.995377  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:58.995430  369869 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:58.995451  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:58.995503  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:58.995543  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:58.995590  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:58.995653  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:58.996607  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:59.026487  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:59.054725  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:59.082553  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:59.110374  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:59.141972  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:59.170097  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:59.201206  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:59.232790  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:59.263940  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:59.292401  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:59.321920  369869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:59.343921  369869 ssh_runner.go:195] Run: openssl version
	I0229 02:30:59.351308  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:59.364059  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369212  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369302  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.375683  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:59.389046  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:59.404101  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409433  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409491  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.416126  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:59.429674  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:59.443405  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448931  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448991  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.455800  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
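
The <hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash convention: TLS libraries look up CA certificates in /etc/ssl/certs by the subject-name hash that `openssl x509 -hash -noout` prints, with a .N suffix to disambiguate hash collisions. A rough local sketch of the same step (hypothetical helper; it shells out to the same openssl invocation seen above and needs write access to the target dir):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash asks openssl for the certificate's subject hash
	// and links <dir>/<hash>.0 at the PEM file, as c_rehash would.
	func linkBySubjectHash(pem, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
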
	I0229 02:30:59.469013  369869 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:59.474745  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:59.481689  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:59.488868  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:59.496380  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:59.503593  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:59.510485  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
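
Each `openssl x509 -noout -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 h), which is how the restart path decides the existing certs are still usable. The same test in pure Go, as a sketch over a PEM-encoded file:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within d, i.e. the openssl -checkend condition.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
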
	I0229 02:30:59.517770  369869 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-071485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:59.517894  369869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:59.517941  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:59.564631  369869 cri.go:89] found id: ""
	I0229 02:30:59.564718  369869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:59.578812  369869 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:59.578881  369869 kubeadm.go:636] restartCluster start
	I0229 02:30:59.578954  369869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:59.592900  369869 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.593909  369869 kubeconfig.go:92] found "default-k8s-diff-port-071485" server: "https://192.168.61.233:8444"
	I0229 02:30:59.596083  369869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:59.609384  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.609466  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.625617  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.110139  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.110282  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.127301  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
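
The repeated "Checking apiserver status" entries, here and in the interleaved blocks below, are iterations of one poll loop: roughly every 500 ms the runner greps for a kube-apiserver process, and when nothing appears before the deadline the code concludes "needs reconfigure: apiserver error: context deadline exceeded" and falls through to the restart path. A stripped-down local sketch of that loop (hypothetical; the real check runs the same pgrep over SSH):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep (the same check as in the log) until a
	// kube-apiserver process shows up or the context deadline expires.
	func waitForAPIServer(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver error: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		fmt.Println(waitForAPIServer(ctx))
	}
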
	I0229 02:30:57.233610  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:57.234113  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:57.234145  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:57.234063  370858 retry.go:31] will retry after 1.238348525s: waiting for machine to come up
	I0229 02:30:58.474146  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:58.474696  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:58.474733  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:58.474642  370858 retry.go:31] will retry after 1.373712981s: waiting for machine to come up
	I0229 02:30:59.850075  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:59.850504  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:59.850526  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:59.850460  370858 retry.go:31] will retry after 2.156069813s: waiting for machine to come up
	I0229 02:30:59.101727  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.101812  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.120465  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.602060  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.602155  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.620588  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.102108  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.102203  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.120822  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.602443  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.602545  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.616796  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.616835  369591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:00.616858  369591 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:00.616873  369591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:00.616940  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:00.661747  369591 cri.go:89] found id: ""
	I0229 02:31:00.661869  369591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:00.684098  369591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:00.696989  369591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:00.697059  369591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708553  369591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708583  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:00.827929  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.578572  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.818119  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.892891  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
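
Because the kubeconfig and manifest files were missing, the stale-config cleanup is skipped and the cluster is rebuilt by re-running the individual `kubeadm init phase` subcommands above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed /var/tmp/minikube/kubeadm.yaml; running the phases separately regenerates the control plane while reusing the existing state under /var/lib/minikube/etcd, rather than starting over with a full `kubeadm init`.
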
	I0229 02:31:01.964926  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:01.965037  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.466098  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.965290  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.465897  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.483060  369591 api_server.go:72] duration metric: took 1.518135432s to wait for apiserver process to appear ...
	I0229 02:31:03.483103  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:03.483127  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:00.610179  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.610299  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.630460  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.109543  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.109680  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.129578  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.610203  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.610301  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.630078  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.109835  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.109945  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.127400  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.610160  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.610269  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.630581  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.109702  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.109836  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.129754  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.610303  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.610389  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.629702  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.110325  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.110459  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.128740  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.610305  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.610403  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.624716  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:05.110349  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.110457  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.130070  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.007911  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:02.008381  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:02.008409  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:02.008330  370858 retry.go:31] will retry after 1.864134048s: waiting for machine to come up
	I0229 02:31:03.873997  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:03.874606  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:03.874653  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:03.874547  370858 retry.go:31] will retry after 2.45659808s: waiting for machine to come up
	I0229 02:31:06.111554  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.111581  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.111596  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.191055  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.191090  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.483401  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.489220  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.489254  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:06.983921  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.988354  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.988430  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:07.483305  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:07.489830  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:31:07.497146  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:31:07.497187  369591 api_server.go:131] duration metric: took 4.014075718s to wait for apiserver health ...
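
The healthz probe above walks through the typical restart sequence: first 403, because the anonymous user cannot read /healthz until the RBAC bootstrap roles exist; then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish; and finally 200 "ok". A minimal version of the probe in Go (a sketch only; it skips TLS verification, whereas the real checker trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// The apiserver cert is signed by the cluster CA, not the system
		// trust store, so this sketch skips verification; don't do this
		// outside of a throwaway test probe.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.72.114:8443/healthz")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// Expect 403 -> 500 -> 200 "ok" as startup completes.
		fmt.Println(resp.StatusCode, string(body))
	}
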
	I0229 02:31:07.497201  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:31:07.497210  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:07.498785  369591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:07.500032  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:07.530625  369591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:31:07.594249  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:07.604940  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:07.604973  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:07.604980  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:07.604989  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:07.604995  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:07.605003  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:07.605015  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:07.605022  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:07.605032  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:07.605052  369591 system_pods.go:74] duration metric: took 10.776743ms to wait for pod list to return data ...
	I0229 02:31:07.605061  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:07.608034  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:07.608059  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:07.608073  369591 node_conditions.go:105] duration metric: took 3.004467ms to run NodePressure ...
	I0229 02:31:07.608096  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:07.975871  369591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980949  369591 kubeadm.go:787] kubelet initialised
	I0229 02:31:07.980970  369591 kubeadm.go:788] duration metric: took 5.071971ms waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980979  369591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:07.986764  369591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.992673  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992698  369591 pod_ready.go:81] duration metric: took 5.911106ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.992707  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992717  369591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.997300  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997322  369591 pod_ready.go:81] duration metric: took 4.594827ms waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.997330  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997335  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.004032  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004052  369591 pod_ready.go:81] duration metric: took 6.71117ms waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.004060  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004066  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.009947  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.009985  369591 pod_ready.go:81] duration metric: took 5.909502ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.010001  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.010009  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.398938  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398965  369591 pod_ready.go:81] duration metric: took 388.944943ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.398975  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398982  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.797706  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797733  369591 pod_ready.go:81] duration metric: took 398.745142ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.797744  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797751  369591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:09.198467  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198496  369591 pod_ready.go:81] duration metric: took 400.737315ms waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:09.198506  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198511  369591 pod_ready.go:38] duration metric: took 1.217523271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:09.198530  369591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:31:09.211194  369591 ops.go:34] apiserver oom_adj: -16
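
The oom_adj check reads /proc/<apiserver pid>/oom_adj; -16 on the legacy -17..15 scale tells the kernel OOM killer to strongly prefer other victims, so the log line confirms the restarted apiserver kept its protection against being killed under memory pressure.
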
	I0229 02:31:09.211222  369591 kubeadm.go:640] restartCluster took 18.633858862s
	I0229 02:31:09.211232  369591 kubeadm.go:406] StartCluster complete in 18.684207766s
	I0229 02:31:09.211263  369591 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.211346  369591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:09.212899  369591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.213213  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:31:09.213318  369591 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:31:09.213406  369591 addons.go:69] Setting storage-provisioner=true in profile "no-preload-247751"
	I0229 02:31:09.213426  369591 addons.go:69] Setting default-storageclass=true in profile "no-preload-247751"
	I0229 02:31:09.213446  369591 addons.go:69] Setting metrics-server=true in profile "no-preload-247751"
	I0229 02:31:09.213463  369591 addons.go:234] Setting addon metrics-server=true in "no-preload-247751"
	I0229 02:31:09.213465  369591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-247751"
	I0229 02:31:09.213463  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0229 02:31:09.213472  369591 addons.go:243] addon metrics-server should already be in state true
	I0229 02:31:09.213435  369591 addons.go:234] Setting addon storage-provisioner=true in "no-preload-247751"
	W0229 02:31:09.213515  369591 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:31:09.213519  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213541  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213915  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213924  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213943  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213978  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.218976  369591 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-247751" context rescaled to 1 replicas
	I0229 02:31:09.219015  369591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:31:09.220657  369591 out.go:177] * Verifying Kubernetes components...
	I0229 02:31:09.221954  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:31:09.230064  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0229 02:31:09.230528  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.231030  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.231053  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.231526  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.231762  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.233032  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0229 02:31:09.233487  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.233929  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0229 02:31:09.234003  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234028  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.234293  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.234406  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.234784  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234811  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.235009  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235068  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.235163  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.235631  369591 addons.go:234] Setting addon default-storageclass=true in "no-preload-247751"
	W0229 02:31:09.235651  369591 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:31:09.235679  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.235738  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235772  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.236123  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.236157  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.250756  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0229 02:31:09.251190  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.251830  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.251855  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.252228  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.252403  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.254210  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.256240  369591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:09.257522  369591 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.257537  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:31:09.257552  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.255418  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0229 02:31:09.255485  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0229 02:31:09.258003  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258129  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258432  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258457  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258664  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258676  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258822  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.258983  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.259278  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.259313  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.259533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.261295  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.261320  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.262706  369591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:31:05.610163  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.610319  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.627782  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.110424  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.110521  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.129628  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.610193  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.610330  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.624176  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.110249  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.110354  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.129955  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.609462  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.609536  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.623687  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.110263  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.110407  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.126900  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.610447  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.625182  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.109675  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.109759  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.124637  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.610399  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.630681  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.630715  369869 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:09.630757  369869 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:09.630777  369869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:09.630844  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:09.683876  369869 cri.go:89] found id: ""
	I0229 02:31:09.683963  369869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:09.706059  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:09.719868  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:09.719939  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734591  369869 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734622  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:09.862689  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:09.263808  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:31:09.263830  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:31:09.263849  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.261760  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.261947  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.263890  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.264339  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.264522  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.264704  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.266885  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267339  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.267358  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.267649  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.267782  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.267862  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.302813  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0229 02:31:09.303329  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.303878  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.303909  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.304305  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.304509  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.306147  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.306434  369591 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.306454  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:31:09.306472  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.309029  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309345  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.309382  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309670  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.309872  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.310048  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.310193  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.402579  369591 node_ready.go:35] waiting up to 6m0s for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:09.402756  369591 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:31:09.420259  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.426629  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:31:09.426655  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:31:09.446028  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.457219  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:31:09.457244  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:31:09.504028  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:09.504054  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:31:09.554137  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:10.485560  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.039492326s)
	I0229 02:31:10.485633  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485646  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.485928  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065634917s)
	I0229 02:31:10.485970  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485986  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486053  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.486072  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486092  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486104  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486112  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486254  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486287  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486304  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486320  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.487538  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487556  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487566  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.487543  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487582  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487579  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.494355  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.494374  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.494614  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.494635  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.494633  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.559201  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.005004802s)
	I0229 02:31:10.559258  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559276  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559592  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559614  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559625  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559633  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559899  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559915  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559926  369591 addons.go:470] Verifying addon metrics-server=true in "no-preload-247751"
	I0229 02:31:10.561833  369591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:31:06.333259  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:06.333776  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:06.333811  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:06.333733  370858 retry.go:31] will retry after 3.223893936s: waiting for machine to come up
	I0229 02:31:09.559349  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:09.559937  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:09.559968  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:09.559891  370858 retry.go:31] will retry after 5.278822831s: waiting for machine to come up
	I0229 02:31:10.560171  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.563240  369591 addons.go:505] enable addons completed in 1.349905679s: enabled=[storage-provisioner default-storageclass metrics-server]
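	[Editor's note: the addon flow above scp's each manifest under /etc/kubernetes/addons and then applies the whole set in a single kubectl invocation against the cluster kubeconfig. A minimal sketch of that final apply step, assuming kubectl on PATH (the log uses the bundled /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl):]

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Manifest paths copied from the log lines above.
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m) // one -f per manifest, applied atomically enough for addons
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s err=%v\n", out, err)
	}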
	I0229 02:31:11.408006  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:10.805438  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.016546  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.132323  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
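	[Editor's note: taken together with the two phases at 02:31:09 above, the reconfigure runs five kubeadm init phases in order: certs all, kubeconfig all, kubelet-start, control-plane all, etcd local. A minimal Go sketch of that sequence, assuming kubeadm on PATH and dropping the env PATH prefix the log uses:]

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Phase order copied from the log; a failure aborts the sequence.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := exec.Command("/bin/bash", "-c",
				fmt.Sprintf("sudo kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml", p))
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", p, err, out)
				return
			}
		}
	}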
	I0229 02:31:11.212201  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:11.212309  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:11.713366  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.212866  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.713327  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.732027  369869 api_server.go:72] duration metric: took 1.519826457s to wait for apiserver process to appear ...
	I0229 02:31:12.732056  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:12.732078  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.109299  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.109349  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.109368  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.166169  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.166209  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.232359  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.267052  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.267099  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.096073  369508 start.go:369] acquired machines lock for "embed-certs-915633" in 58.856797615s
	I0229 02:31:16.096132  369508 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:31:16.096144  369508 fix.go:54] fixHost starting: 
	I0229 02:31:16.096651  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:16.096692  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:16.115912  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0229 02:31:16.116419  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:16.116967  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:31:16.116999  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:16.117362  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:16.117562  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:16.117742  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:31:16.119589  369508 fix.go:102] recreateIfNeeded on embed-certs-915633: state=Stopped err=<nil>
	I0229 02:31:16.119614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	W0229 02:31:16.119809  369508 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:31:16.121566  369508 out.go:177] * Restarting existing kvm2 VM for "embed-certs-915633" ...
	I0229 02:31:14.842498  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843049  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has current primary IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843083  370051 main.go:141] libmachine: (old-k8s-version-275488) Found IP for machine: 192.168.39.160
	I0229 02:31:14.843112  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserving static IP address...
	I0229 02:31:14.843485  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.843510  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | skip adding static IP to network mk-old-k8s-version-275488 - found existing host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"}
	I0229 02:31:14.843525  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserved static IP address: 192.168.39.160
	I0229 02:31:14.843535  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Getting to WaitForSSH function...
	I0229 02:31:14.843553  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting for SSH to be available...
	I0229 02:31:14.845739  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846087  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.846120  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846289  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH client type: external
	I0229 02:31:14.846319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa (-rw-------)
	I0229 02:31:14.846355  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:14.846372  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | About to run SSH command:
	I0229 02:31:14.846390  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | exit 0
	I0229 02:31:14.979384  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | SSH cmd err, output: <nil>: 
	I0229 02:31:14.979896  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetConfigRaw
	I0229 02:31:14.980716  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:14.983852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984278  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.984319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984639  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:31:14.984865  370051 machine.go:88] provisioning docker machine ...
	I0229 02:31:14.984890  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:14.985140  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985324  370051 buildroot.go:166] provisioning hostname "old-k8s-version-275488"
	I0229 02:31:14.985347  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985494  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:14.988036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988438  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.988464  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988633  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:14.988829  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989003  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989174  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:14.989361  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:14.989604  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:14.989621  370051 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275488 && echo "old-k8s-version-275488" | sudo tee /etc/hostname
	I0229 02:31:15.125564  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275488
	
	I0229 02:31:15.125605  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.128963  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129570  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.129652  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129735  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.129996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130185  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130380  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.130616  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.130872  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.130900  370051 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275488/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275488' | sudo tee -a /etc/hosts; 
				fi
			fi
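	[Editor's note: the shell block above ensures /etc/hosts resolves the machine name: if no line already ends in the hostname, an existing 127.0.1.1 entry is rewritten, otherwise one is appended. For reference, a sketch that regenerates the same script for any hostname; buildHostsCmd is an illustrative helper, not minikube API:]

	package main

	import "fmt"

	// buildHostsCmd reproduces the /etc/hosts fix-up script from the log.
	func buildHostsCmd(name string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	}

	func main() { fmt.Println(buildHostsCmd("old-k8s-version-275488")) }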
	I0229 02:31:15.272298  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:31:15.272337  370051 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:15.272368  370051 buildroot.go:174] setting up certificates
	I0229 02:31:15.272385  370051 provision.go:83] configureAuth start
	I0229 02:31:15.272402  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:15.272772  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:15.276382  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.276838  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.276869  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.277051  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.279927  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280298  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.280326  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280505  370051 provision.go:138] copyHostCerts
	I0229 02:31:15.280555  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:15.280566  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:15.280619  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:15.280749  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:15.280764  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:15.280789  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:15.280857  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:15.280871  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:15.280891  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:15.280954  370051 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275488 san=[192.168.39.160 192.168.39.160 localhost 127.0.0.1 minikube old-k8s-version-275488]
	I0229 02:31:15.360428  370051 provision.go:172] copyRemoteCerts
	I0229 02:31:15.360487  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:15.360512  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.363540  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.363931  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.363966  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.364154  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.364337  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.364495  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.364622  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.453643  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:15.483233  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 02:31:15.512164  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
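	[Editor's note: the "generating server cert" line above lists the SANs baked into server.pem: two IPs and several DNS names. A minimal crypto/x509 sketch of a certificate carrying those SANs; it self-signs to stay short, whereas minikube signs with its CA (note the ca-key= argument in the log), so the signing step is a simplification:]

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-275488"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list copied from the san=[...] log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.39.160"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-275488"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}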
	I0229 02:31:15.543453  370051 provision.go:86] duration metric: configureAuth took 271.048547ms
	I0229 02:31:15.543484  370051 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:15.543705  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:31:15.543816  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.546472  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.546807  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.546835  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.547049  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.547272  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547455  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547662  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.547861  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.548035  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.548052  370051 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:15.835533  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:15.835572  370051 machine.go:91] provisioned docker machine in 850.691497ms
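	[Editor's note: the %!s(MISSING) tokens in the printf command above (and in the date +%!s(MISSING).%!N(MISSING) command further down) are not corruption. They are Go's fmt rendering of a format verb with no matching operand, introduced when the shell command is echoed through a Printf-style logger; the guest actually received plain %s and %N. A two-line demonstration:]

	package main

	import "fmt"

	func main() {
		// Each verb lacking an operand is rendered as %!verb(MISSING).
		s := fmt.Sprintf("date +%s.%N")
		fmt.Println(s) // prints: date +%!s(MISSING).%!N(MISSING)
	}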
	I0229 02:31:15.835589  370051 start.go:300] post-start starting for "old-k8s-version-275488" (driver="kvm2")
	I0229 02:31:15.835604  370051 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:15.835635  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:15.835995  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:15.836025  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.838946  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839297  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.839330  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839460  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.839665  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.839839  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.840008  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.925849  370051 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:15.931227  370051 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:15.931260  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:15.931363  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:15.931465  370051 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:15.931574  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:15.942500  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:15.972803  370051 start.go:303] post-start completed in 137.19736ms
	I0229 02:31:15.972838  370051 fix.go:56] fixHost completed within 24.084893996s
	I0229 02:31:15.972873  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.975698  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976063  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.976093  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976279  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.976518  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976659  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976795  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.976959  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.977119  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.977130  370051 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:31:16.095892  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173876.041987567
	
	I0229 02:31:16.095917  370051 fix.go:206] guest clock: 1709173876.041987567
	I0229 02:31:16.095927  370051 fix.go:219] Guest: 2024-02-29 02:31:16.041987567 +0000 UTC Remote: 2024-02-29 02:31:15.972843681 +0000 UTC m=+279.886639354 (delta=69.143886ms)
	I0229 02:31:16.095954  370051 fix.go:190] guest clock delta is within tolerance: 69.143886ms
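	[Editor's note: the clock check above parses the guest's `date +%s.%N` output and compares it with the host timestamp, accepting the drift if it falls inside a tolerance window. A sketch reproducing the 69.143886ms delta from the log; the 2-second tolerance here is an assumption, not minikube's actual value:]

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK returns the absolute guest/host drift and whether it is tolerable.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(1709173876, 41987567) // parsed from the guest's 1709173876.041987567
		host := time.Date(2024, 2, 29, 2, 31, 15, 972843681, time.UTC)
		d, ok := clockDeltaOK(guest, host, 2*time.Second)
		fmt.Println(d, ok) // 69.143886ms true
	}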
	I0229 02:31:16.095962  370051 start.go:83] releasing machines lock for "old-k8s-version-275488", held for 24.208056775s
	I0229 02:31:16.095996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.096336  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:16.099518  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100016  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.100060  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100189  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100751  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100955  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.101035  370051 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:16.101084  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.101167  370051 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:16.101190  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.104588  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.104638  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105000  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105059  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105101  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105335  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105546  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105590  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105821  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105832  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106002  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:16.106028  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106180  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.732828  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.739797  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.739827  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.232355  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.240421  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:16.240462  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
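
For reference, the [+]/[-] lines above are the apiserver's verbose health output; the same breakdown can be fetched by hand (a sketch -- the endpoint may require cluster credentials, so the plain curl below assumes anonymous access is permitted):

    # Query the health endpoint; ?verbose lists each check instead of a bare "ok".
    curl -k "https://192.168.61.233:8444/healthz?verbose"

A 500 with "[-]poststarthook/rbac/bootstrap-roles failed" during startup just means the bootstrap RBAC hook has not finished; the poll at 02:31:16.732451 below succeeds once it completes.
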
	I0229 02:31:16.732451  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.740118  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:31:16.748529  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:31:16.748567  369869 api_server.go:131] duration metric: took 4.0165029s to wait for apiserver health ...
	I0229 02:31:16.748580  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:31:16.748588  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:16.750561  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:16.194120  370051 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:16.220808  370051 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:16.386082  370051 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:16.393419  370051 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:16.393512  370051 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:16.418966  370051 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:16.419003  370051 start.go:475] detecting cgroup driver to use...
	I0229 02:31:16.419087  370051 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:16.444372  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:16.466354  370051 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:16.466430  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:16.488710  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:16.509561  370051 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:16.651716  370051 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:16.840453  370051 docker.go:233] disabling docker service ...
	I0229 02:31:16.840538  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:16.869611  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:16.890123  370051 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:17.047701  370051 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:17.225457  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:17.248553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:17.275486  370051 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 02:31:17.275572  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.290350  370051 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:17.290437  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.304093  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.320562  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
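
After the sed edits above, the cri-o drop-in should contain roughly the following (a sketch of the expected result, not output captured from this run):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
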
	I0229 02:31:17.339790  370051 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:17.356570  370051 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:17.371208  370051 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:17.371303  370051 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:17.390748  370051 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
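
The two steps above can be verified by hand; the sysctl key only exists once br_netfilter is loaded, which is why the first probe at 02:31:17.371208 failed with status 255:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # the key now exists once the module is present
    cat /proc/sys/net/ipv4/ip_forward            # expect "1" after the echo above
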
	I0229 02:31:17.405750  370051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:17.555023  370051 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:17.754419  370051 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:17.754508  370051 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:17.760190  370051 start.go:543] Will wait 60s for crictl version
	I0229 02:31:17.760245  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:17.765195  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:17.815839  370051 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:17.815953  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.857470  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.896796  370051 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 02:31:13.906892  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:15.907106  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:16.914513  369591 node_ready.go:49] node "no-preload-247751" has status "Ready":"True"
	I0229 02:31:16.914545  369591 node_ready.go:38] duration metric: took 7.511932085s waiting for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:16.914560  369591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:16.925133  369591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940518  369591 pod_ready.go:92] pod "coredns-76f75df574-2z5w8" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:16.940553  369591 pod_ready.go:81] duration metric: took 15.382701ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940568  369591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.122967  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Start
	I0229 02:31:16.123141  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring networks are active...
	I0229 02:31:16.124019  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network default is active
	I0229 02:31:16.124630  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network mk-embed-certs-915633 is active
	I0229 02:31:16.125118  369508 main.go:141] libmachine: (embed-certs-915633) Getting domain xml...
	I0229 02:31:16.126026  369508 main.go:141] libmachine: (embed-certs-915633) Creating domain...
	I0229 02:31:17.664537  369508 main.go:141] libmachine: (embed-certs-915633) Waiting to get IP...
	I0229 02:31:17.665883  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.666462  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.666595  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.666455  371066 retry.go:31] will retry after 193.172159ms: waiting for machine to come up
	I0229 02:31:17.861043  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.861754  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.861781  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.861651  371066 retry.go:31] will retry after 298.133474ms: waiting for machine to come up
	I0229 02:31:18.161304  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.161851  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.161886  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.161818  371066 retry.go:31] will retry after 402.680342ms: waiting for machine to come up
	I0229 02:31:18.566482  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.567145  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.567165  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.567068  371066 retry.go:31] will retry after 536.886613ms: waiting for machine to come up
	I0229 02:31:19.106090  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.106797  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.106823  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.106714  371066 retry.go:31] will retry after 583.032631ms: waiting for machine to come up
	I0229 02:31:19.691531  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.692096  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.692127  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.692000  371066 retry.go:31] will retry after 780.156818ms: waiting for machine to come up
	I0229 02:31:16.752375  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:16.783785  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
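
The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log; a minimal bridge conflist of the kind minikube generates looks roughly like this (a sketch -- field values are assumptions, apart from the 10.244.0.0/16 pod CIDR used elsewhere in this run):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
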
	I0229 02:31:16.816646  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:16.829430  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:16.829480  369869 system_pods.go:61] "coredns-5dd5756b68-652db" [d989183e-dc0d-4913-8eab-fdfac0cf7ad7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:16.829491  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [aba29f47-cf0e-4ee5-8d18-7647b36369e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:16.829501  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [26a426b2-d5b7-456e-a733-3317009974ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:16.829517  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [a896f9fa-991f-44bb-bd97-02fac3494eea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:16.829528  369869 system_pods.go:61] "kube-proxy-g976s" [bc750be0-ae2b-4033-b65b-f1cccaebf32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:16.829536  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [d99d25bf-25f4-4057-aedb-fc5ba797af47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:16.829544  369869 system_pods.go:61] "metrics-server-57f55c9bc5-86frx" [0ad81c0d-3f9a-45d8-93d8-bbb9e276d5b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:16.829560  369869 system_pods.go:61] "storage-provisioner" [92683c3e-04c1-4cef-988d-3b8beb7d4399] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:16.829570  369869 system_pods.go:74] duration metric: took 12.896339ms to wait for pod list to return data ...
	I0229 02:31:16.829584  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:16.837494  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:16.837524  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:16.837535  369869 node_conditions.go:105] duration metric: took 7.942051ms to run NodePressure ...
	I0229 02:31:16.837560  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:17.293873  369869 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300874  369869 kubeadm.go:787] kubelet initialised
	I0229 02:31:17.300907  369869 kubeadm.go:788] duration metric: took 7.00259ms waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300919  369869 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:17.315838  369869 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.328228  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328265  369869 pod_ready.go:81] duration metric: took 12.396088ms waiting for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.328278  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328287  369869 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.335458  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335487  369869 pod_ready.go:81] duration metric: took 7.145351ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.335497  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335505  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.356278  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356365  369869 pod_ready.go:81] duration metric: took 20.849982ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.356385  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356396  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:19.376170  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:17.898162  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:17.901332  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.901809  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:17.901840  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.902046  370051 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:17.907256  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:17.924135  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:31:17.924218  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:17.986923  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:17.986992  370051 ssh_runner.go:195] Run: which lz4
	I0229 02:31:17.992110  370051 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:17.997252  370051 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:17.997287  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 02:31:20.124958  370051 crio.go:444] Took 2.132885 seconds to copy over tarball
	I0229 02:31:20.125075  370051 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
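
Annotated, the preload expansion above is a single streaming untar (same command as the log line, with the flags spelled out):

    # --xattrs --xattrs-include security.capability   preserve capability xattrs on the binaries
    # -I lz4                                          decompress through lz4
    # -C /var                                         unpack under /var (container storage lives there)
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
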
	I0229 02:31:18.948383  369591 pod_ready.go:102] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:20.950330  369591 pod_ready.go:92] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:20.950359  369591 pod_ready.go:81] duration metric: took 4.009782336s waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.950372  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460878  369591 pod_ready.go:92] pod "kube-apiserver-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.460907  369591 pod_ready.go:81] duration metric: took 1.510525429s waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460922  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468463  369591 pod_ready.go:92] pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.468487  369591 pod_ready.go:81] duration metric: took 7.556807ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468497  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476459  369591 pod_ready.go:92] pod "kube-proxy-cdc4l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.476488  369591 pod_ready.go:81] duration metric: took 7.983254ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476501  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482564  369591 pod_ready.go:92] pod "kube-scheduler-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.482589  369591 pod_ready.go:81] duration metric: took 6.080532ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482598  369591 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.474186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:20.474741  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:20.474784  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:20.474647  371066 retry.go:31] will retry after 845.550951ms: waiting for machine to come up
	I0229 02:31:21.322246  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:21.323007  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:21.323031  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:21.322935  371066 retry.go:31] will retry after 1.085864892s: waiting for machine to come up
	I0229 02:31:22.410244  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:22.410735  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:22.410766  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:22.410687  371066 retry.go:31] will retry after 1.587558593s: waiting for machine to come up
	I0229 02:31:24.000303  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:24.000914  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:24.000944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:24.000828  371066 retry.go:31] will retry after 2.058374822s: waiting for machine to come up
	I0229 02:31:21.867552  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.972250  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.981829  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.981860  369869 pod_ready.go:81] duration metric: took 6.625453699s waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.981875  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994568  369869 pod_ready.go:92] pod "kube-proxy-g976s" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.994597  369869 pod_ready.go:81] duration metric: took 12.712769ms waiting for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994609  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002085  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:24.002110  369869 pod_ready.go:81] duration metric: took 7.492788ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002133  369869 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.625489  370051 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.500380961s)
	I0229 02:31:23.625526  370051 crio.go:451] Took 3.500531 seconds to extract the tarball
	I0229 02:31:23.625536  370051 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:31:23.671458  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:23.714048  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:23.714087  370051 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:31:23.714189  370051 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.714213  370051 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.714309  370051 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:31:23.714424  370051 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.714269  370051 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.714461  370051 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.714519  370051 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.714192  370051 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.716086  370051 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.716076  370051 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716088  370051 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.716143  370051 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.716081  370051 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.716275  370051 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:31:23.838722  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.844569  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 02:31:23.853089  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.857738  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.864060  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.865519  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.926256  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.997349  370051 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:31:23.997401  370051 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.997463  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.010625  370051 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:31:24.010674  370051 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:31:24.010722  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083140  370051 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:31:24.083203  370051 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:31:24.083232  370051 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:31:24.083247  370051 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.083266  370051 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.083269  370051 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.083308  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083319  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083364  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083166  370051 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:31:24.083426  370051 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.083471  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123878  370051 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:31:24.123928  370051 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.123972  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123982  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:24.123973  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:31:24.124043  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.124051  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.124097  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.124153  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.152226  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.270585  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:31:24.305436  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:31:24.305532  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:31:24.305621  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:31:24.305629  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:31:24.305799  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:31:24.316950  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:31:24.635837  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:24.791670  370051 cache_images.go:92] LoadImages completed in 1.077558745s
	W0229 02:31:24.791798  370051 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
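
Per image, LoadImages compares the runtime's image ID against the expected digest and re-loads from the on-disk cache on mismatch; the cache tarballs are absent on this host, hence the warning. A shell sketch of that logic for one image:

    # Hash taken from the "needs transfer" line above; paths as in the log.
    id=$(sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.1 2>/dev/null)
    if [ "$id" != "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" ]; then
      sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
      # ...then re-load from .minikube/cache/images/... -- the file that is missing here
    fi
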
	I0229 02:31:24.791902  370051 ssh_runner.go:195] Run: crio config
	I0229 02:31:24.851132  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:31:24.851164  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:24.851189  370051 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:24.851213  370051 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275488 NodeName:old-k8s-version-275488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:31:24.851423  370051 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-275488
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.160:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:31:24.851524  370051 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275488 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
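
The unit and drop-in written just below take effect through the usual systemd merge; the rendered result can be inspected with systemctl (a sketch, not commands captured from this run):

    systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf overrides
    sudo systemctl daemon-reload   # re-read units after the files are copied in
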
	I0229 02:31:24.851598  370051 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:31:24.864237  370051 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:24.864330  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:24.879552  370051 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 02:31:24.901027  370051 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:24.920638  370051 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 02:31:24.941894  370051 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:24.947439  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
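
That one-liner is minikube's idempotent /etc/hosts update: strip any stale mapping, append the current one, and copy the result back with sudo. Expanded (same commands, with the tab written explicitly):

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts      # drop any old entry
      printf '192.168.39.160\tcontrol-plane.minikube.internal\n'    # append the fresh one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
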
	I0229 02:31:24.962396  370051 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488 for IP: 192.168.39.160
	I0229 02:31:24.962435  370051 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:24.962621  370051 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:24.962673  370051 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:24.962781  370051 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.key
	I0229 02:31:24.962851  370051 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619
	I0229 02:31:24.962919  370051 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key
	I0229 02:31:24.963087  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:24.963126  370051 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:24.963138  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:24.963185  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:24.963213  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:24.963245  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:24.963296  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:24.963980  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:24.996049  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:25.030503  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:25.057695  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:25.091982  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:25.126636  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:25.156613  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:25.186480  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:25.221012  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:25.254122  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:25.282646  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:25.312624  370051 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:25.335020  370051 ssh_runner.go:195] Run: openssl version
	I0229 02:31:25.342920  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:25.355808  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361349  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361433  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.368335  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:25.380799  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:25.393069  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398466  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398539  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.404776  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:25.416735  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:25.428884  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434503  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434584  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.441187  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
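
The hash-named symlinks above exist because OpenSSL looks up CAs in /etc/ssl/certs by subject hash; 51391683.0 and friends are simply `openssl x509 -hash` of the corresponding PEM. As a sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
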
	I0229 02:31:25.453174  370051 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:25.458712  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:25.466032  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:25.473895  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:25.482948  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:25.491808  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:25.499003  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
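
Each -checkend 86400 run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it does not, so all control-plane certs here are still valid. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"
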
	I0229 02:31:25.506691  370051 kubeadm.go:404] StartCluster: {Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:25.506829  370051 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:25.506883  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:25.551867  370051 cri.go:89] found id: ""
	I0229 02:31:25.551970  370051 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:25.564446  370051 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:25.564476  370051 kubeadm.go:636] restartCluster start
	I0229 02:31:25.564545  370051 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:25.576275  370051 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:25.577406  370051 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:25.578043  370051 kubeconfig.go:146] "old-k8s-version-275488" context is missing from /home/jenkins/minikube-integration/18063-316644/kubeconfig - will repair!
	I0229 02:31:25.578979  370051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:25.580805  370051 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:25.592154  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:25.592259  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:25.609268  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:26.092701  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.092827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.108636  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
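
Each probe in this loop is just a process check; until a kube-apiserver with the minikube arguments is running, restartCluster keeps polling on a roughly 500ms cadence, as the timestamps show:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver not up yet"
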
	I0229 02:31:24.491508  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.492827  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.496040  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.062093  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:26.062582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:26.062612  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:26.062525  371066 retry.go:31] will retry after 2.231071357s: waiting for machine to come up
	I0229 02:31:28.295693  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:28.296180  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:28.296214  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:28.296116  371066 retry.go:31] will retry after 2.376277578s: waiting for machine to come up
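
The retry.go entries show the machine-IP lookup backing off with waits that grow between attempts (2.23s, 2.38s, then 4.44s above). A rough sketch of that randomized, growing backoff; lookupIP is a hypothetical stand-in for the libvirt DHCP-lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease lookup; here it always
// fails so the backoff behaviour is visible.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	base := 1500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the base wait and add jitter, mirroring the increasing
		// "will retry after N.NNs" intervals in the log.
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		base = base * 3 / 2
	}
	fmt.Println("gave up waiting for machine IP")
}
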
	I0229 02:31:26.010834  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.031628  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.592320  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.592412  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.606907  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.092891  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.093028  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.112353  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.592956  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.593058  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.612315  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.092611  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.092729  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.108095  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.592592  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.592679  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.612145  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.092605  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.092720  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.113807  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.593002  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.593085  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.609337  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.092667  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.092757  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.112800  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.592328  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.592415  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.610909  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:31.092418  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.092529  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.109490  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.990551  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.990785  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:30.675432  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:30.675962  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:30.675995  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:30.675901  371066 retry.go:31] will retry after 4.442717853s: waiting for machine to come up
	I0229 02:31:30.511576  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.515611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:35.010325  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:31.593046  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.593128  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.608148  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.092187  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.092299  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.107573  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.593184  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.593312  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.607993  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.092500  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.092603  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.107359  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.592987  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.593101  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.608041  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.092919  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.093023  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.107597  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.593200  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.593295  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.608100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.092589  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.092683  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.107100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.592815  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.592928  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.610879  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.610916  370051 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:35.610930  370051 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:35.610947  370051 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:35.611032  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:35.660059  370051 cri.go:89] found id: ""
	I0229 02:31:35.660146  370051 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:35.682067  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:35.694455  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:35.694542  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707118  370051 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707149  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:35.834811  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
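
Rather than a full `kubeadm init`, the restart path re-runs individual init phases: certs and kubeconfig here, then kubelet-start, control-plane, and etcd further down. A compact sketch of driving that sequence, with the PATH prefix and config path copied from the commands logged here (error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		// Re-running the phases regenerates certs, kubeconfigs, and static
		// pod manifests in place instead of wiping the cluster.
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
	fmt.Println("control plane reconfigured")
}
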
	I0229 02:31:35.123364  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123906  369508 main.go:141] libmachine: (embed-certs-915633) Found IP for machine: 192.168.50.218
	I0229 02:31:35.123925  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has current primary IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123931  369508 main.go:141] libmachine: (embed-certs-915633) Reserving static IP address...
	I0229 02:31:35.124398  369508 main.go:141] libmachine: (embed-certs-915633) Reserved static IP address: 192.168.50.218
	I0229 02:31:35.124423  369508 main.go:141] libmachine: (embed-certs-915633) Waiting for SSH to be available...
	I0229 02:31:35.124441  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.124468  369508 main.go:141] libmachine: (embed-certs-915633) DBG | skip adding static IP to network mk-embed-certs-915633 - found existing host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"}
	I0229 02:31:35.124487  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Getting to WaitForSSH function...
	I0229 02:31:35.126676  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127004  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.127035  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127137  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH client type: external
	I0229 02:31:35.127168  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa (-rw-------)
	I0229 02:31:35.127199  369508 main.go:141] libmachine: (embed-certs-915633) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:35.127213  369508 main.go:141] libmachine: (embed-certs-915633) DBG | About to run SSH command:
	I0229 02:31:35.127224  369508 main.go:141] libmachine: (embed-certs-915633) DBG | exit 0
	I0229 02:31:35.251075  369508 main.go:141] libmachine: (embed-certs-915633) DBG | SSH cmd err, output: <nil>: 
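
WaitForSSH above just runs `exit 0` through the external ssh client with host-key checking disabled; a zero exit means the guest's sshd is accepting connections. A stripped-down sketch with the same option set, where the key path and address are placeholders:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/path/to/id_rsa", // placeholder for the machine key
		"docker@192.0.2.10",     // placeholder address
		"exit 0",
	}
	for {
		// Success of "exit 0" proves authentication and the session
		// channel both work, not just that the port is open.
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
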
	I0229 02:31:35.251474  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetConfigRaw
	I0229 02:31:35.252256  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.254934  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255350  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.255378  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255676  369508 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/config.json ...
	I0229 02:31:35.255881  369508 machine.go:88] provisioning docker machine ...
	I0229 02:31:35.255905  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:35.256154  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256344  369508 buildroot.go:166] provisioning hostname "embed-certs-915633"
	I0229 02:31:35.256369  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256506  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.258794  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259163  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.259186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259337  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.259551  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259716  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259875  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.260066  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.260256  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.260269  369508 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-915633 && echo "embed-certs-915633" | sudo tee /etc/hostname
	I0229 02:31:35.383734  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-915633
	
	I0229 02:31:35.383770  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.386559  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.386913  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.386944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.387121  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.387359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387631  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387815  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.387979  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.388158  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.388175  369508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-915633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-915633/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-915633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:35.521449  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
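
The shell fragment above either rewrites an existing 127.0.1.1 entry or appends one so the new hostname resolves locally. The same check-then-edit logic in Go against an in-memory copy of /etc/hosts (a sketch only; the real flow runs the shell over SSH as shown):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the grep/sed logic: if any line already ends in
// the hostname, do nothing; else rewrite a 127.0.1.1 line or append one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(hosts, "embed-certs-915633"))
}
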
	I0229 02:31:35.521490  369508 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:35.521530  369508 buildroot.go:174] setting up certificates
	I0229 02:31:35.521544  369508 provision.go:83] configureAuth start
	I0229 02:31:35.521573  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.521923  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.524829  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525193  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.525217  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525348  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.527582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.527980  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.528012  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.528164  369508 provision.go:138] copyHostCerts
	I0229 02:31:35.528216  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:35.528234  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:35.528290  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:35.528384  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:35.528396  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:35.528415  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:35.528514  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:35.528525  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:35.528544  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:35.528591  369508 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.embed-certs-915633 san=[192.168.50.218 192.168.50.218 localhost 127.0.0.1 minikube embed-certs-915633]
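
The provision step generates a server certificate whose SANs combine the machine IP, loopback, and the names listed in san=[...] above. A self-contained sketch with crypto/x509 showing how such a SAN set is attached; it self-signs for brevity, whereas minikube signs with its CA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-915633"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// SANs matching the san=[...] list logged above: IPs go in
		// IPAddresses, hostnames in DNSNames.
		IPAddresses: []net.IP{net.ParseIP("192.168.50.218"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "embed-certs-915633"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
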
	I0229 02:31:35.778616  369508 provision.go:172] copyRemoteCerts
	I0229 02:31:35.778679  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:35.778706  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.782134  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782605  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.782640  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782833  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.783103  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.783305  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.783522  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:35.870506  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:35.904595  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:31:35.936515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:35.966505  369508 provision.go:86] duration metric: configureAuth took 444.939951ms
	I0229 02:31:35.966539  369508 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:35.966725  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:31:35.966831  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.969731  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970133  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.970176  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970402  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.970623  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970788  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970968  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.971139  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.971382  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.971401  369508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:36.262676  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:36.262719  369508 machine.go:91] provisioned docker machine in 1.00682197s
	I0229 02:31:36.262731  369508 start.go:300] post-start starting for "embed-certs-915633" (driver="kvm2")
	I0229 02:31:36.262743  369508 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:36.262765  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.263140  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:36.263179  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.265718  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266095  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.266126  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.266486  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.266658  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.266795  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.359474  369508 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:36.365071  369508 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:36.365110  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:36.365202  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:36.365279  369508 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:36.365395  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:36.376823  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:36.406525  369508 start.go:303] post-start completed in 143.75518ms
	I0229 02:31:36.406588  369508 fix.go:56] fixHost completed within 20.310442727s
	I0229 02:31:36.406619  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.409415  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.409840  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.409875  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.410009  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.410214  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410412  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410567  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.410715  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:36.410936  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:36.410950  369508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:36.520508  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173896.494400897
	
	I0229 02:31:36.520543  369508 fix.go:206] guest clock: 1709173896.494400897
	I0229 02:31:36.520555  369508 fix.go:219] Guest: 2024-02-29 02:31:36.494400897 +0000 UTC Remote: 2024-02-29 02:31:36.406594326 +0000 UTC m=+361.755087901 (delta=87.806571ms)
	I0229 02:31:36.520584  369508 fix.go:190] guest clock delta is within tolerance: 87.806571ms
	I0229 02:31:36.520597  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 20.424490067s
	I0229 02:31:36.520629  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.520949  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:36.523819  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524146  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.524185  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.524912  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525109  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525206  369508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:36.525251  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.525332  369508 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:36.525360  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.528265  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528470  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528614  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528641  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528826  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528829  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.528855  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.529047  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.529135  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529253  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529321  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529414  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529478  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.529556  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.611757  369508 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:36.638875  369508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:36.786219  369508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:36.798964  369508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:36.799056  369508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:36.817942  369508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:36.817975  369508 start.go:475] detecting cgroup driver to use...
	I0229 02:31:36.818086  369508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:36.837019  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:36.855078  369508 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:36.855159  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:36.873444  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:36.891708  369508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:37.031928  369508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:37.212859  369508 docker.go:233] disabling docker service ...
	I0229 02:31:37.212960  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:37.235232  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:37.253901  369508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:37.401366  369508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:37.530791  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:37.547864  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:37.570344  369508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:31:37.570416  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.582275  369508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:37.582345  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.593628  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.605168  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.616567  369508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:37.628153  369508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:37.638579  369508 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:37.638640  369508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:37.652738  369508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
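
The failed sysctl is tolerated because /proc/sys/net/bridge only exists once br_netfilter is loaded, so the setup falls back to modprobe and then enables IPv4 forwarding. The same probe-then-fallback as a sketch (requires root; paths are the ones in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const knob = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(knob); err != nil {
		// The knob only appears once br_netfilter is loaded, which is
		// why the failed sysctl above "might be okay".
		fmt.Println("bridge netfilter not present, loading module:", err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward needs root:", err)
	}
}
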
	I0229 02:31:37.664118  369508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:37.785330  369508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:37.933006  369508 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:37.933095  369508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:37.938625  369508 start.go:543] Will wait 60s for crictl version
	I0229 02:31:37.938702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:31:37.943285  369508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:37.984992  369508 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:37.985105  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.018467  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.051472  369508 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:31:34.991345  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.991987  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:38.052850  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:38.055688  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.055970  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:38.056006  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.056253  369508 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:38.060925  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:38.076126  369508 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:31:38.076197  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:38.116261  369508 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:31:38.116372  369508 ssh_runner.go:195] Run: which lz4
	I0229 02:31:38.121080  369508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:38.125711  369508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:38.125755  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:31:37.012008  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:39.018348  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.790885  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.042778  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.130251  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.215289  370051 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:37.215384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:37.715589  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.715938  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.215781  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.716505  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.216238  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.716182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.992988  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.491712  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.492458  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:40.139859  369508 crio.go:444] Took 2.018817 seconds to copy over tarball
	I0229 02:31:40.139953  369508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:31:43.071745  369508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.931752333s)
	I0229 02:31:43.071797  369508 crio.go:451] Took 2.931905 seconds to extract the tarball
	I0229 02:31:43.071809  369508 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:31:43.118127  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:43.171147  369508 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:31:43.171176  369508 cache_images.go:84] Images are preloaded, skipping loading
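
The preload path first asks crictl for its image list; only when the expected image is missing does it copy the ~458MB tarball over and unpack it, as happened above. A sketch of the extraction step, reusing the exact tar invocation from the log (lz4 must be on PATH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// --xattrs keeps file capabilities (security.capability) intact so
	// binaries inside the image store keep their privileges after unpack.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}
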
	I0229 02:31:43.171262  369508 ssh_runner.go:195] Run: crio config
	I0229 02:31:43.232177  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:31:43.232203  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:43.232229  369508 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:43.232247  369508 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-915633 NodeName:embed-certs-915633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:31:43.232419  369508 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-915633"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
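
The kubeadm config above is rendered from the options struct logged at the start of this block. A minimal sketch of that render step with text/template, covering a few of the fields; the template literal here is a trimmed, hypothetical stand-in for minikube's real one:

package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress  string
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.50.218",
		NodeName:          "embed-certs-915633",
		KubernetesVersion: "v1.28.4",
		PodSubnet:         "10.244.0.0/16",
	}
	// template.Must panics on a malformed template, which is acceptable
	// for a compile-time literal like this one.
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
}
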
	
	I0229 02:31:43.232519  369508 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-915633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:31:43.232596  369508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:31:43.244392  369508 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:43.244467  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:43.256293  369508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0229 02:31:43.275397  369508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:43.295494  369508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0229 02:31:43.316812  369508 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:43.321496  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:43.335055  369508 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633 for IP: 192.168.50.218
	I0229 02:31:43.335092  369508 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:43.335270  369508 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:43.335316  369508 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:43.335388  369508 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/client.key
	I0229 02:31:43.335442  369508 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key.cc0da009
	I0229 02:31:43.335475  369508 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key
	I0229 02:31:43.335584  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:43.335610  369508 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:43.335619  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:43.335642  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:43.335673  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:43.335710  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:43.335779  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:43.336455  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:43.364985  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:43.394189  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:43.424515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:43.456589  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:43.486396  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:43.516931  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:43.546421  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:43.578923  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:43.608333  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:43.637196  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:43.667522  369508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:43.688266  369508 ssh_runner.go:195] Run: openssl version
	I0229 02:31:43.695616  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:43.709892  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715346  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715426  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.722688  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:43.735866  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:43.749967  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757599  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757671  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.765157  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:43.779671  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:43.792900  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798505  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798576  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.805192  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
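(The hash/symlink pairs above follow OpenSSL's hashed-directory convention: openssl x509 -hash prints the certificate's subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs lets OpenSSL find the CA by hash at verification time, the same layout c_rehash produces. A sketch for one certificate, using the minikubeCA file from the log:)

    # print the subject hash OpenSSL will look up (b5213941 for minikubeCA per the log)
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # .0 marks the first (here, only) certificate with this hash in the directory
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"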
	I0229 02:31:43.818233  369508 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:43.823681  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:43.831016  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:43.837899  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:43.844802  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:43.851881  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:43.858689  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
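(The six -checkend probes above are expiry checks, not content checks: openssl x509 -checkend N exits 0 if the certificate is still valid N seconds from now and 1 otherwise, so 86400 asks "does this cert survive the next 24 hours?". A sketch of how the exit status drives a decision; the regenerate message is a placeholder, not minikube's actual handler:)

    # 86400 s = 24 h; non-zero exit means the cert expires inside that window
    if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "etcd serving cert expires within 24h; regenerate before restarting" >&2
    fi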
	I0229 02:31:43.865749  369508 kubeadm.go:404] StartCluster: {Name:embed-certs-915633 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:43.865852  369508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:43.865925  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:43.906012  369508 cri.go:89] found id: ""
	I0229 02:31:43.906116  369508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:43.918241  369508 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:43.918265  369508 kubeadm.go:636] restartCluster start
	I0229 02:31:43.918349  369508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:43.930524  369508 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:43.931550  369508 kubeconfig.go:92] found "embed-certs-915633" server: "https://192.168.50.218:8443"
	I0229 02:31:43.933612  369508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:43.944469  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:43.944519  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:43.958194  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:44.444746  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:44.444840  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:44.458567  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:41.510364  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.511424  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.216236  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:41.716082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.215537  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.715524  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.215873  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.715634  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.216464  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.715519  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.216430  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.716196  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.990995  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.489390  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:44.944934  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.003707  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.018797  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.445348  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.445435  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.460199  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.944750  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.944879  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.959309  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.445218  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.445313  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.459195  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.945456  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.945538  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.959212  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.444711  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.444819  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.459189  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.944651  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.944726  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.958733  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.445008  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.445100  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.460126  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.944649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.944731  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.959993  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:49.444545  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.444628  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.458889  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.011657  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.508465  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:46.215715  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:46.715657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.216495  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.715491  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.215459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.715556  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.215675  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.716046  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.215993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.715594  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.489578  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.990638  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:49.945108  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.945265  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.960625  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.444843  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.444923  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.459329  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.944871  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.944963  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.959583  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.444601  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.444704  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.462037  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.944573  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.944658  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.958538  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.445111  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.445269  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.462902  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.945088  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.945182  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.960241  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.444649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.444738  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.458642  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.945214  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.945291  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.960552  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.960588  369508 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:53.960600  369508 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:53.960615  369508 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:53.960671  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:54.005230  369508 cri.go:89] found id: ""
	I0229 02:31:54.005321  369508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:54.027544  369508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:54.040517  369508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:54.040577  369508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051200  369508 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051223  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:54.168817  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:50.509119  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.509526  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:54.511540  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:51.215927  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:51.715888  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.215659  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.715769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.216175  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.715755  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.216468  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.216280  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.715924  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.992721  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:57.490570  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:55.091652  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.346578  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.443373  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
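(Rather than a full kubeadm init, the restart path above replays individual kubeadm init phase subcommands — certs, kubeconfig, kubelet-start, control-plane, etcd — against the same kubeadm.yaml, regenerating exactly the pieces the earlier config check found missing. The equivalent sequence by hand, assuming the same binary layout as the log:)

    # assumption: versioned binaries and config live where the log shows them
    K8S=/var/lib/minikube/binaries/v1.28.4
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # $phase is intentionally unquoted so "certs all" splits into two arguments
        sudo env PATH="$K8S:$PATH" kubeadm init phase $phase --config "$CFG"
    done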
	I0229 02:31:55.542444  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:55.542562  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.042870  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.542972  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.571776  369508 api_server.go:72] duration metric: took 1.029332492s to wait for apiserver process to appear ...
	I0229 02:31:56.571808  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:56.571831  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:56.572606  369508 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0229 02:31:57.072145  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.557011  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.557048  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.557066  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.609944  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.610010  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.610028  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.669911  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:59.669955  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:57.010655  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:59.510097  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:00.071971  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.084661  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.084690  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:00.572262  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.577772  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.577807  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:01.072371  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:01.077306  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
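(Once every post-start hook passes, /healthz flips from the verbose 500 breakdown to the bare 200 "ok" above. The per-check [+]/[-] listing can still be requested explicitly, and individual checks are addressable by name — handy when only one hook such as rbac/bootstrap-roles is lagging:)

    # force the per-check breakdown even on success
    curl -k "https://192.168.50.218:8443/healthz?verbose"
    # query a single named check
    curl -k https://192.168.50.218:8443/healthz/poststarthook/rbac/bootstrap-roles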
	I0229 02:32:01.084492  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:32:01.084531  369508 api_server.go:131] duration metric: took 4.512702749s to wait for apiserver health ...
	I0229 02:32:01.084544  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:32:01.084554  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:32:01.086337  369508 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:56.215653  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.715898  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.215954  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.216366  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.716093  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.215944  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.715553  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.216341  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.715677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.087584  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:32:01.099724  369508 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
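(The 457-byte payload scp'd above is minikube's bridge CNI config landing in /etc/cni/net.d, where cri-o picks up the lexically first conflist. The log does not show the file's contents; an illustrative bridge + host-local conflist of the same general shape — values here are assumptions, not the actual file — would be:)

    # illustrative only: name/subnet choices are assumptions, not minikube's exact 1-k8s.conflist
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF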
	I0229 02:32:01.122381  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:32:01.133632  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:32:01.133674  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:32:01.133684  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:32:01.133697  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:32:01.133710  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:32:01.133720  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:32:01.133728  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:32:01.133738  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:32:01.133746  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:32:01.133755  369508 system_pods.go:74] duration metric: took 11.346225ms to wait for pod list to return data ...
	I0229 02:32:01.133767  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:32:01.138716  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:32:01.138746  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:32:01.138760  369508 node_conditions.go:105] duration metric: took 4.985648ms to run NodePressure ...
	I0229 02:32:01.138783  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:01.368503  369508 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373648  369508 kubeadm.go:787] kubelet initialised
	I0229 02:32:01.373669  369508 kubeadm.go:788] duration metric: took 5.137378ms waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373677  369508 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:01.379649  369508 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.384724  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384750  369508 pod_ready.go:81] duration metric: took 5.071017ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.384758  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384765  369508 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.390019  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390048  369508 pod_ready.go:81] duration metric: took 5.27491ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.390059  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390067  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.396275  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396294  369508 pod_ready.go:81] duration metric: took 6.218856ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.396302  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396307  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.525881  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525914  369508 pod_ready.go:81] duration metric: took 129.596783ms waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.525927  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525935  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.926806  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926843  369508 pod_ready.go:81] duration metric: took 400.889304ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.926856  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926864  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.326588  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326621  369508 pod_ready.go:81] duration metric: took 399.74816ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.326633  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326639  369508 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.727730  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727759  369508 pod_ready.go:81] duration metric: took 401.108694ms waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.727769  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727776  369508 pod_ready.go:38] duration metric: took 1.354090438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:02.727795  369508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:32:02.742069  369508 ops.go:34] apiserver oom_adj: -16
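(The oom_adj read above confirms kube-apiserver is shielded from the OOM killer: /proc/<pid>/oom_adj is the legacy interface, range -17 to +15, where lower means "kill later" and -17 disables killing; the -16 seen here is the kernel's scaled-down view of the strongly negative oom_score_adj the kubelet assigns Guaranteed control-plane pods. To inspect both scales for the same process:)

    # -o picks the oldest matching process; an assumption for uniqueness on this node
    PID=$(pgrep -o kube-apiserver)
    # legacy scale (-17..15) followed by modern scale (-1000..1000)
    cat /proc/$PID/oom_adj /proc/$PID/oom_score_adj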
	I0229 02:32:02.742097  369508 kubeadm.go:640] restartCluster took 18.823823408s
	I0229 02:32:02.742107  369508 kubeadm.go:406] StartCluster complete in 18.876382148s
	I0229 02:32:02.742127  369508 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.742271  369508 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:32:02.744032  369508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.744292  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:32:02.744429  369508 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:32:02.744507  369508 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-915633"
	I0229 02:32:02.744526  369508 addons.go:69] Setting default-storageclass=true in profile "embed-certs-915633"
	I0229 02:32:02.744540  369508 addons.go:69] Setting metrics-server=true in profile "embed-certs-915633"
	I0229 02:32:02.744550  369508 addons.go:234] Setting addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:02.744555  369508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-915633"
	W0229 02:32:02.744558  369508 addons.go:243] addon metrics-server should already be in state true
	I0229 02:32:02.744619  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744532  369508 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-915633"
	W0229 02:32:02.744735  369508 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:32:02.744853  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744682  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:32:02.745085  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745113  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745121  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745175  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745339  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745416  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.749865  369508 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-915633" context rescaled to 1 replicas
	I0229 02:32:02.749907  369508 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:32:02.751823  369508 out.go:177] * Verifying Kubernetes components...
	I0229 02:32:02.753296  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:32:02.762688  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0229 02:32:02.763050  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0229 02:32:02.763274  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763693  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763872  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.763895  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.763963  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0229 02:32:02.764307  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.764337  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.764554  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.764592  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.764665  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.765103  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765135  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765144  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765170  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765481  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.765495  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.765863  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.766129  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.769253  369508 addons.go:234] Setting addon default-storageclass=true in "embed-certs-915633"
	W0229 02:32:02.769274  369508 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:32:02.769295  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.769578  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.769607  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.787345  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35577
	I0229 02:32:02.787806  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.788243  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.788266  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.789755  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0229 02:32:02.790272  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.790361  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0229 02:32:02.790634  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.790727  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.791027  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.791192  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791206  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791367  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791402  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791705  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.791924  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.792315  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.792987  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.793026  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.793278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.794105  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.795128  369508 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:32:02.796451  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:32:02.796472  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:32:02.796496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.797812  369508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:59.493919  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.989683  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:02.799249  369508 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.799270  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:32:02.799289  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.800109  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.800960  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.801015  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.801300  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.801496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.801635  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.801763  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.802278  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802617  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.802645  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802836  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.803026  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.803174  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.803390  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.818656  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0229 02:32:02.819105  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.819606  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.819625  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.820022  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.820366  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.822054  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.822412  369508 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:02.822432  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:32:02.822451  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.825579  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826260  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.826293  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826463  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.826614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.826761  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.826954  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
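
As an aside on the "scp memory --> /etc/kubernetes/addons/..." lines above: the manifest bytes are streamed from the driver process's memory to the guest path rather than copied from a local file. A minimal Go sketch of that pattern, assuming a hypothetical sshTarget and shelling out to the ssh binary (minikube's real transfer goes through its own sshutil client):

    package sketch

    import (
        "bytes"
        "os/exec"
    )

    // scpMemory streams in-memory manifest bytes to a destination path on the
    // guest, approximating the "scp memory --> ..." log lines. sshTarget is a
    // hypothetical "user@host" string; sudo tee writes the bytes in place.
    func scpMemory(sshTarget string, data []byte, dest string) error {
        cmd := exec.Command("ssh", sshTarget, "sudo tee "+dest+" >/dev/null")
        cmd.Stdin = bytes.NewReader(data)
        return cmd.Run()
    }
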
	I0229 02:32:02.911316  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.945655  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:32:02.945683  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:32:02.981318  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:32:02.981352  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:32:02.983632  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:03.009561  369508 node_ready.go:35] waiting up to 6m0s for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:03.009586  369508 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:32:03.044265  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:03.044293  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:32:03.094073  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:04.287008  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.3033415s)
	I0229 02:32:04.287081  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287094  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287375  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37602435s)
	I0229 02:32:04.287416  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287428  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287440  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287463  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287478  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287487  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287750  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287800  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287828  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287861  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287805  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287914  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287834  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.287774  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289370  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289377  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.289397  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.293892  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.293919  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.294180  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.294198  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.294212  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.376595  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.28244915s)
	I0229 02:32:04.376679  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.376710  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377004  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377022  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377031  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.377039  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377037  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377275  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377319  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377331  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377348  369508 addons.go:470] Verifying addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:04.380194  369508 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:32:04.381510  369508 addons.go:505] enable addons completed in 1.637082823s: enabled=[storage-provisioner default-storageclass metrics-server]
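
The addon flow just completed follows a fixed shape: copy each manifest onto the guest, then apply them all in one kubectl invocation with KUBECONFIG pointed at the in-VM kubeconfig, exactly as the Run lines above show. A rough sketch under the same assumptions (applyAddons is a hypothetical helper; the kubectl binary and kubeconfig paths are taken verbatim from the log):

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // applyAddons applies previously copied manifests in a single kubectl
    // call on the guest, mirroring the logged command shape.
    func applyAddons(sshTarget string, manifests []string) error {
        cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.28.4/kubectl apply"
        for _, m := range manifests {
            cmd += " -f " + m
        }
        out, err := exec.Command("ssh", sshTarget, cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply: %v: %s", err, out)
        }
        return nil
    }

Batching all -f flags into one apply is why the log shows a single 1.28s Completed line for four metrics-server manifests rather than four separate runs.
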
	I0229 02:32:02.010578  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:04.509975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.216197  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.716302  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.216170  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.715615  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.216580  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.716088  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.215743  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.716142  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.216543  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.715853  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.991440  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.992389  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:08.491225  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.014879  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.518854  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.009085  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:09.009296  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:06.216206  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:06.715748  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.215964  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.716419  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.216034  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.715611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.216207  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.716408  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.216144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.716454  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.491751  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:12.991326  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:10.013574  369508 node_ready.go:49] node "embed-certs-915633" has status "Ready":"True"
	I0229 02:32:10.013605  369508 node_ready.go:38] duration metric: took 7.004009102s waiting for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:10.013617  369508 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:10.020332  369508 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025740  369508 pod_ready.go:92] pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.025766  369508 pod_ready.go:81] duration metric: took 5.403764ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025778  369508 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534182  369508 pod_ready.go:92] pod "etcd-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.534212  369508 pod_ready.go:81] duration metric: took 508.426559ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534238  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.048997  369508 pod_ready.go:92] pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:11.049027  369508 pod_ready.go:81] duration metric: took 514.780048ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.049040  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:13.056477  369508 pod_ready.go:102] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.010305  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:13.011477  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.215611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:11.716198  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.216332  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.716413  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.216407  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.716466  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.216182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.716285  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.215995  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.715613  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.491511  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:17.494485  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:15.056064  369508 pod_ready.go:92] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.056093  369508 pod_ready.go:81] duration metric: took 4.007044542s waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.056104  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061418  369508 pod_ready.go:92] pod "kube-proxy-6tt7l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.061440  369508 pod_ready.go:81] duration metric: took 5.329971ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061451  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578305  369508 pod_ready.go:92] pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.578332  369508 pod_ready.go:81] duration metric: took 516.873281ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578341  369508 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:17.585624  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:19.586470  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
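
The pod_ready lines above are a poll loop: each system-critical pod is fetched repeatedly and its Ready condition inspected until it reports True or the per-pod timeout lapses. A sketch of that check using client-go (assumed dependency; minikube's own logic lives in pod_ready.go and differs in detail):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitPodReady polls a pod by name until it is Ready or the timeout
    // expires, like the repeated `has status "Ready":"False"` lines above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && isPodReady(p) {
                return true
            }
            time.Sleep(2 * time.Second)
        }
        return false
    }
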
	I0229 02:32:15.510630  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:18.010381  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:16.215530  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:16.716420  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.216031  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.716303  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.216082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.715523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.216166  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.716503  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.215680  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.715770  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.989766  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.989821  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.586820  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:20.509895  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.010371  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.215523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:21.715617  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.216133  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.716029  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.216141  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.715578  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.215640  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.715601  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.215959  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.716394  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.990493  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.990911  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.489681  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.085933  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.086754  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.508765  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:27.508956  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:29.512409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.215946  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:26.715834  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.216243  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.715581  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.215521  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.715849  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.716497  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.215657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.715492  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
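
The long run of pgrep commands from process 370051 is a wait loop: minikube shells into the guest roughly every 500ms looking for a kube-apiserver process, and pgrep's exit status is the signal. A minimal sketch, assuming a hypothetical sshTarget:

    package sketch

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess mirrors the repeated
    // `sudo pgrep -xnf kube-apiserver.*minikube.*` runs: pgrep exits 0 only
    // when at least one matching process exists on the guest.
    func waitForAPIServerProcess(sshTarget string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            err := exec.Command("ssh", sshTarget,
                "sudo pgrep -xnf kube-apiserver.*minikube.*").Run()
            if err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for kube-apiserver process")
    }
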
	I0229 02:32:30.490400  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:32.990250  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:30.586107  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:33.086852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.518170  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:34.009514  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.216322  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:31.716160  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.215557  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.715618  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.215761  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.716216  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.216460  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.716244  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.215551  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.715633  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.990305  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.990956  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:35.585472  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:37.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.509112  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:38.509634  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.215910  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:36.716307  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:37.216308  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:37.216404  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:37.262324  370051 cri.go:89] found id: ""
	I0229 02:32:37.262358  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.262370  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:37.262378  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:37.262442  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:37.303758  370051 cri.go:89] found id: ""
	I0229 02:32:37.303790  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.303802  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:37.303809  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:37.303880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:37.349512  370051 cri.go:89] found id: ""
	I0229 02:32:37.349538  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.349546  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:37.349553  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:37.349607  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:37.389630  370051 cri.go:89] found id: ""
	I0229 02:32:37.389657  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.389668  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:37.389676  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:37.389752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:37.435918  370051 cri.go:89] found id: ""
	I0229 02:32:37.435954  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.435967  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:37.435976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:37.436044  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:37.479336  370051 cri.go:89] found id: ""
	I0229 02:32:37.479369  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.479377  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:37.479384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:37.479460  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:37.519944  370051 cri.go:89] found id: ""
	I0229 02:32:37.519979  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.519991  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:37.519999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:37.520071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:37.563848  370051 cri.go:89] found id: ""
	I0229 02:32:37.563875  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.563884  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:37.563895  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:37.563915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:37.607989  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:37.608025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:37.660272  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:37.660324  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:37.676878  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:37.676909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:37.805099  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:37.805132  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:37.805159  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:40.378467  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:40.393066  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:40.393221  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:40.432592  370051 cri.go:89] found id: ""
	I0229 02:32:40.432619  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.432628  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:40.432634  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:40.432693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:40.473651  370051 cri.go:89] found id: ""
	I0229 02:32:40.473706  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.473716  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:40.473722  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:40.473781  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:40.520262  370051 cri.go:89] found id: ""
	I0229 02:32:40.520292  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.520303  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:40.520312  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:40.520374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:40.560359  370051 cri.go:89] found id: ""
	I0229 02:32:40.560393  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.560402  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:40.560408  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:40.560474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:40.602145  370051 cri.go:89] found id: ""
	I0229 02:32:40.602173  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.602181  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:40.602187  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:40.602266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:40.640744  370051 cri.go:89] found id: ""
	I0229 02:32:40.640778  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.640791  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:40.640799  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:40.640869  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:40.681863  370051 cri.go:89] found id: ""
	I0229 02:32:40.681895  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.681908  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:40.681916  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:40.681985  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:40.725859  370051 cri.go:89] found id: ""
	I0229 02:32:40.725890  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.725899  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:40.725910  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:40.725924  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:40.794666  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:40.794705  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:40.854173  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:40.854215  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:40.901744  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:40.901786  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:40.925331  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:40.925371  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:41.005785  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
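
Each diagnostic pass above queries CRI-O by container name; crictl prints one container ID per line, so an empty result is what produces the `No container was found matching ...` warnings. A small sketch of that query, mirroring the logged command:

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // listContainerIDs asks the CRI runtime for containers in any state whose
    // name matches the filter. crictl exits 0 with empty output when nothing
    // matches, which the log records as `found id: ""`.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one container ID per line
    }
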
	I0229 02:32:39.491292  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.494077  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:40.086540  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:42.584644  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:44.587012  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.010764  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.510128  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.506756  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:43.522038  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:43.522135  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:43.559609  370051 cri.go:89] found id: ""
	I0229 02:32:43.559635  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.559642  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:43.559649  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:43.559707  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:43.609059  370051 cri.go:89] found id: ""
	I0229 02:32:43.609087  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.609096  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:43.609102  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:43.609159  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:43.648988  370051 cri.go:89] found id: ""
	I0229 02:32:43.649018  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.649029  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:43.649037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:43.649104  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:43.690995  370051 cri.go:89] found id: ""
	I0229 02:32:43.691028  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.691042  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:43.691054  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:43.691120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:43.729221  370051 cri.go:89] found id: ""
	I0229 02:32:43.729249  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.729257  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:43.729263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:43.729334  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:43.767141  370051 cri.go:89] found id: ""
	I0229 02:32:43.767174  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.767186  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:43.767194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:43.767266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:43.807926  370051 cri.go:89] found id: ""
	I0229 02:32:43.807962  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.807970  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:43.807976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:43.808029  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:43.857945  370051 cri.go:89] found id: ""
	I0229 02:32:43.857973  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.857981  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:43.857991  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:43.858005  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:43.941290  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:43.941338  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:43.986788  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:43.986823  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:44.037384  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:44.037421  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:44.052668  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:44.052696  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:44.127124  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:43.990179  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.990921  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.991525  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.086821  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:49.585987  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.510273  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:48.009067  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:50.011776  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:46.627409  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:46.642306  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:46.642397  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:46.685358  370051 cri.go:89] found id: ""
	I0229 02:32:46.685389  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.685400  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:46.685431  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:46.685493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:46.724996  370051 cri.go:89] found id: ""
	I0229 02:32:46.725026  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.725035  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:46.725041  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:46.725113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:46.765815  370051 cri.go:89] found id: ""
	I0229 02:32:46.765849  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.765857  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:46.765863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:46.765924  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:46.808946  370051 cri.go:89] found id: ""
	I0229 02:32:46.808980  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.808991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:46.809000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:46.809068  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:46.865068  370051 cri.go:89] found id: ""
	I0229 02:32:46.865106  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.865119  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:46.865127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:46.865200  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:46.932233  370051 cri.go:89] found id: ""
	I0229 02:32:46.932260  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.932268  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:46.932275  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:46.932331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:46.985701  370051 cri.go:89] found id: ""
	I0229 02:32:46.985732  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.985744  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:46.985752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:46.985819  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:47.027497  370051 cri.go:89] found id: ""
	I0229 02:32:47.027524  370051 logs.go:276] 0 containers: []
	W0229 02:32:47.027536  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:47.027548  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:47.027565  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:47.075955  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:47.075990  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:47.093922  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:47.093949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:47.165000  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:47.165029  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:47.165046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:47.250161  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:47.250201  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:49.794654  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:49.809706  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:49.809787  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:49.868163  370051 cri.go:89] found id: ""
	I0229 02:32:49.868197  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.868217  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:49.868223  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:49.868277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:49.928462  370051 cri.go:89] found id: ""
	I0229 02:32:49.928495  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.928508  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:49.928516  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:49.928580  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:49.975725  370051 cri.go:89] found id: ""
	I0229 02:32:49.975755  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.975765  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:49.975774  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:49.975849  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:50.017007  370051 cri.go:89] found id: ""
	I0229 02:32:50.017036  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.017046  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:50.017051  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:50.017118  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:50.054522  370051 cri.go:89] found id: ""
	I0229 02:32:50.054551  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.054560  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:50.054566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:50.054620  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:50.096274  370051 cri.go:89] found id: ""
	I0229 02:32:50.096300  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.096308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:50.096319  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:50.096382  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:50.142543  370051 cri.go:89] found id: ""
	I0229 02:32:50.142581  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.142590  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:50.142597  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:50.142667  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:50.182452  370051 cri.go:89] found id: ""
	I0229 02:32:50.182482  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.182492  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:50.182505  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:50.182522  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:50.266311  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:50.266355  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:50.309277  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:50.309322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:50.360492  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:50.360536  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:50.376711  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:50.376744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:50.447128  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
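The block above is one full iteration of minikube's control-plane probe: for each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) it lists matching CRI containers, finds none, and then falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. A rough manual equivalent of the probe, assuming shell access to the node (for example via "minikube ssh"; the access method is an assumption, not part of this log), is:

    # Hedged sketch: re-run the same per-component probe by hand, using the
    # exact crictl flags shown in the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -n "$ids" ]; then echo "$name: $ids"; else echo "no container matching \"$name\""; fi
    done

An empty result for every component, as seen here, means the kubelet never started any control-plane pods on this node.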
	I0229 02:32:49.992032  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.490801  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:51.586053  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:53.586268  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.510054  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:54.510975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
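The pod_ready lines interleaved through this section come from three other test processes (PIDs 369591, 369508, 369869) concurrently polling metrics-server pods that never report Ready. A hedged kubectl equivalent of that poll (the k8s-app=metrics-server label selector is an assumption; this excerpt only shows the pod names, and the kube context must be set to the matching profile):

    # Hedged sketch: print each metrics-server pod and its Ready condition.
    kubectl -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'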
	I0229 02:32:52.947926  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:52.970209  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:52.970317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:53.010840  370051 cri.go:89] found id: ""
	I0229 02:32:53.010868  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.010878  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:53.010886  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:53.010983  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:53.049458  370051 cri.go:89] found id: ""
	I0229 02:32:53.049490  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.049503  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:53.049511  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:53.049578  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:53.088615  370051 cri.go:89] found id: ""
	I0229 02:32:53.088646  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.088656  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:53.088671  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:53.088738  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:53.130176  370051 cri.go:89] found id: ""
	I0229 02:32:53.130210  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.130237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:53.130247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:53.130317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:53.177876  370051 cri.go:89] found id: ""
	I0229 02:32:53.177908  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.177920  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:53.177928  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:53.177991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:53.216036  370051 cri.go:89] found id: ""
	I0229 02:32:53.216065  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.216074  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:53.216080  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:53.216143  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:53.254673  370051 cri.go:89] found id: ""
	I0229 02:32:53.254705  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.254716  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:53.254724  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:53.254785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:53.291508  370051 cri.go:89] found id: ""
	I0229 02:32:53.291539  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.291551  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:53.291564  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:53.291581  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:53.343312  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:53.343354  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:53.359264  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:53.359294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:53.431396  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:53.431428  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:53.431445  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:53.512494  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:53.512529  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:56.057340  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:56.073074  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:56.073158  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:56.111650  370051 cri.go:89] found id: ""
	I0229 02:32:56.111684  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.111704  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:56.111713  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:56.111785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:54.990490  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.991005  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:55.587290  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:58.086312  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:57.008288  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:59.011396  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.150147  370051 cri.go:89] found id: ""
	I0229 02:32:56.150178  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.150191  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:56.150200  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:56.150280  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:56.192842  370051 cri.go:89] found id: ""
	I0229 02:32:56.192878  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.192890  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:56.192898  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:56.192969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:56.232013  370051 cri.go:89] found id: ""
	I0229 02:32:56.232051  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.232062  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:56.232079  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:56.232151  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:56.273824  370051 cri.go:89] found id: ""
	I0229 02:32:56.273858  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.273871  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:56.273882  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:56.273949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:56.312112  370051 cri.go:89] found id: ""
	I0229 02:32:56.312139  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.312147  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:56.312153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:56.312203  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:56.352558  370051 cri.go:89] found id: ""
	I0229 02:32:56.352585  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.352593  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:56.352600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:56.352666  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:56.397719  370051 cri.go:89] found id: ""
	I0229 02:32:56.397762  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.397775  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:56.397790  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:56.397808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:56.447793  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:56.447831  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:56.463859  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:56.463894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:56.540306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:56.540333  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:56.540347  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:56.633201  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:56.633247  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
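Every "describe nodes" attempt fails identically: kubectl, pointed at the node's kubeconfig, gets connection refused from localhost:8443, which confirms that no apiserver is listening there at all. Two quick checks that distinguish "nothing listening" from other failure modes (both assume shell access to the node):

    # Hedged checks: a refused TCP connect (rather than a timeout) is
    # exactly what the kubectl error above reports.
    sudo ss -tlnp | grep ':8443' || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "connect to 8443 failed"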
	I0229 02:32:59.207459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:59.222165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:59.222271  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:59.261197  370051 cri.go:89] found id: ""
	I0229 02:32:59.261230  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.261242  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:59.261251  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:59.261338  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:59.300874  370051 cri.go:89] found id: ""
	I0229 02:32:59.300917  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.300940  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:59.300950  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:59.301025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:59.345399  370051 cri.go:89] found id: ""
	I0229 02:32:59.345435  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.345446  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:59.345455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:59.345525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:59.386068  370051 cri.go:89] found id: ""
	I0229 02:32:59.386102  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.386112  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:59.386132  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:59.386184  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:59.436597  370051 cri.go:89] found id: ""
	I0229 02:32:59.436629  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.436641  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:59.436649  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:59.436708  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:59.481417  370051 cri.go:89] found id: ""
	I0229 02:32:59.481446  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.481462  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:59.481469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:59.481535  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:59.527725  370051 cri.go:89] found id: ""
	I0229 02:32:59.527752  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.527763  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:59.527771  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:59.527845  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:59.574502  370051 cri.go:89] found id: ""
	I0229 02:32:59.574535  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.574547  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:59.574561  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:59.574579  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:59.669584  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:59.669630  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.730049  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:59.730096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:59.779562  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:59.779613  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:59.797016  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:59.797046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:59.876438  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:58.991584  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.489321  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:03.489615  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:00.585463  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:02.587986  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.588479  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.509980  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.009579  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:02.377144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:02.391585  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:02.391682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:02.432359  370051 cri.go:89] found id: ""
	I0229 02:33:02.432390  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.432399  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:02.432406  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:02.432462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:02.476733  370051 cri.go:89] found id: ""
	I0229 02:33:02.476768  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.476781  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:02.476790  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:02.476856  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:02.521414  370051 cri.go:89] found id: ""
	I0229 02:33:02.521440  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.521448  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:02.521454  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:02.521513  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:02.561663  370051 cri.go:89] found id: ""
	I0229 02:33:02.561690  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.561698  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:02.561704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:02.561755  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:02.611953  370051 cri.go:89] found id: ""
	I0229 02:33:02.611989  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.612002  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:02.612010  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:02.612079  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:02.663254  370051 cri.go:89] found id: ""
	I0229 02:33:02.663282  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.663290  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:02.663297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:02.663348  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:02.721449  370051 cri.go:89] found id: ""
	I0229 02:33:02.721484  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.721497  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:02.721506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:02.721579  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:02.761197  370051 cri.go:89] found id: ""
	I0229 02:33:02.761239  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.761251  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:02.761265  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:02.761282  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:02.810457  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:02.810498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:02.828906  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:02.828940  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:02.911895  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:02.911932  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:02.911945  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:02.995120  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:02.995152  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:05.544629  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:05.559266  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:05.559342  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:05.609673  370051 cri.go:89] found id: ""
	I0229 02:33:05.609706  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.609718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:05.609727  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:05.609795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:05.665161  370051 cri.go:89] found id: ""
	I0229 02:33:05.665192  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.665203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:05.665211  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:05.665282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:05.719923  370051 cri.go:89] found id: ""
	I0229 02:33:05.719949  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.719957  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:05.719963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:05.720025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:05.765189  370051 cri.go:89] found id: ""
	I0229 02:33:05.765224  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.765237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:05.765245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:05.765357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:05.803788  370051 cri.go:89] found id: ""
	I0229 02:33:05.803820  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.803829  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:05.803836  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:05.803909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:05.842152  370051 cri.go:89] found id: ""
	I0229 02:33:05.842178  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.842188  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:05.842197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:05.842278  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:05.885042  370051 cri.go:89] found id: ""
	I0229 02:33:05.885071  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.885084  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:05.885092  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:05.885156  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:05.926032  370051 cri.go:89] found id: ""
	I0229 02:33:05.926069  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.926082  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:05.926096  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:05.926112  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:06.014702  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:06.014744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:06.063510  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:06.063550  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:06.114215  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:06.114272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:06.130132  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:06.130169  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:05.490726  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.491068  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.085225  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:09.087524  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:06.508469  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:08.509399  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:06.205692  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
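The "container status" step in each cycle uses a two-stage fallback: it prefers whichever crictl "which" finds (falling back to the bare command name if "which" prints nothing), and if that whole invocation fails it tries docker instead. Spelled out, the logged one-liner is equivalent to:

    # Restatement of the fallback chain used in the container-status step above.
    CRICTL=$(which crictl || echo crictl)     # full path if installed, else bare name
    sudo "$CRICTL" ps -a || sudo docker ps -a # CRI runtime first, Docker second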
	I0229 02:33:08.706549  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:08.722548  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:08.722614  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:08.768518  370051 cri.go:89] found id: ""
	I0229 02:33:08.768553  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.768564  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:08.768572  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:08.768630  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:08.804600  370051 cri.go:89] found id: ""
	I0229 02:33:08.804630  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.804643  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:08.804651  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:08.804721  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:08.842466  370051 cri.go:89] found id: ""
	I0229 02:33:08.842497  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.842510  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:08.842518  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:08.842589  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:08.878384  370051 cri.go:89] found id: ""
	I0229 02:33:08.878412  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.878421  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:08.878427  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:08.878484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:08.924228  370051 cri.go:89] found id: ""
	I0229 02:33:08.924262  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.924275  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:08.924295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:08.924374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:08.966122  370051 cri.go:89] found id: ""
	I0229 02:33:08.966157  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.966168  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:08.966177  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:08.966254  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:09.011109  370051 cri.go:89] found id: ""
	I0229 02:33:09.011135  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.011144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:09.011152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:09.011217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:09.059716  370051 cri.go:89] found id: ""
	I0229 02:33:09.059749  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.059782  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:09.059795  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:09.059812  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:09.110564  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:09.110599  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:09.126037  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:09.126065  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:09.199827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:09.199858  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:09.199892  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:09.282624  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:09.282661  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:09.990502  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.991783  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.586475  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:13.586740  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:10.511051  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:12.512644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:15.009478  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.829017  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:11.842826  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:11.842894  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:11.881652  370051 cri.go:89] found id: ""
	I0229 02:33:11.881689  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.881700  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:11.881709  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:11.881773  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:11.919252  370051 cri.go:89] found id: ""
	I0229 02:33:11.919291  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.919302  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:11.919309  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:11.919380  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:11.959145  370051 cri.go:89] found id: ""
	I0229 02:33:11.959175  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.959187  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:11.959196  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:11.959263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:12.002105  370051 cri.go:89] found id: ""
	I0229 02:33:12.002134  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.002145  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:12.002153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:12.002219  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:12.042157  370051 cri.go:89] found id: ""
	I0229 02:33:12.042188  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.042221  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:12.042249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:12.042326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:12.080121  370051 cri.go:89] found id: ""
	I0229 02:33:12.080150  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.080158  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:12.080165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:12.080231  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:12.119259  370051 cri.go:89] found id: ""
	I0229 02:33:12.119286  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.119294  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:12.119301  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:12.119357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:12.160136  370051 cri.go:89] found id: ""
	I0229 02:33:12.160171  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.160182  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:12.160195  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:12.160209  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:12.209770  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:12.209810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:12.226429  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:12.226460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:12.295933  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:12.295966  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:12.295978  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:12.380794  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:12.380843  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:14.971692  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:14.986085  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:14.986162  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:15.024756  370051 cri.go:89] found id: ""
	I0229 02:33:15.024788  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.024801  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:15.024809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:15.024868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:15.065131  370051 cri.go:89] found id: ""
	I0229 02:33:15.065159  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.065172  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:15.065180  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:15.065251  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:15.104744  370051 cri.go:89] found id: ""
	I0229 02:33:15.104775  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.104786  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:15.104794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:15.104858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:15.145710  370051 cri.go:89] found id: ""
	I0229 02:33:15.145737  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.145745  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:15.145752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:15.145803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:15.184908  370051 cri.go:89] found id: ""
	I0229 02:33:15.184933  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.184942  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:15.184951  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:15.185016  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:15.230195  370051 cri.go:89] found id: ""
	I0229 02:33:15.230220  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.230241  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:15.230249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:15.230326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:15.269750  370051 cri.go:89] found id: ""
	I0229 02:33:15.269774  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.269783  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:15.269789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:15.269852  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:15.312331  370051 cri.go:89] found id: ""
	I0229 02:33:15.312360  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.312373  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:15.312387  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:15.312402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:15.363032  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:15.363067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:15.422421  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:15.422463  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:15.445235  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:15.445272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:15.530010  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:15.530047  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:15.530066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:14.489188  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.991028  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.090733  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.587045  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:17.510766  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:20.009379  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.116265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:18.130375  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:18.130439  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:18.167740  370051 cri.go:89] found id: ""
	I0229 02:33:18.167767  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.167776  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:18.167782  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:18.167843  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:18.205621  370051 cri.go:89] found id: ""
	I0229 02:33:18.205653  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.205662  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:18.205670  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:18.205725  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:18.246917  370051 cri.go:89] found id: ""
	I0229 02:33:18.246954  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.246975  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:18.246983  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:18.247040  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:18.285087  370051 cri.go:89] found id: ""
	I0229 02:33:18.285114  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.285123  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:18.285130  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:18.285181  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:18.323989  370051 cri.go:89] found id: ""
	I0229 02:33:18.324018  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.324027  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:18.324033  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:18.324094  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:18.372741  370051 cri.go:89] found id: ""
	I0229 02:33:18.372769  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.372779  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:18.372785  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:18.372838  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:18.432846  370051 cri.go:89] found id: ""
	I0229 02:33:18.432888  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.432900  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:18.432908  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:18.432977  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:18.486357  370051 cri.go:89] found id: ""
	I0229 02:33:18.486387  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.486399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:18.486411  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:18.486431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:18.532363  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:18.532402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:18.582035  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:18.582076  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:18.599009  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:18.599050  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:18.673580  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:18.673609  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:18.673625  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
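The dmesg invocation repeated in each cycle asks for human-readable output without a pager or color, restricted to warning severity and above, trimmed to the last 400 lines. Assuming util-linux dmesg, the same call with long options spelled out (behavior should be unchanged from the -PH -L=never form above):

    # Long-option form of the logged dmesg command, for readability.
    sudo dmesg --human --nopager --color=never \
      --level warn,err,crit,alert,emerg | tail -n 400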
	I0229 02:33:19.490704  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.990251  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.085541  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:23.086148  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:22.009826  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:24.509388  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.259614  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:21.274150  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:21.274250  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:21.311859  370051 cri.go:89] found id: ""
	I0229 02:33:21.311895  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.311908  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:21.311917  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:21.311984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:21.364260  370051 cri.go:89] found id: ""
	I0229 02:33:21.364296  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.364309  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:21.364317  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:21.364391  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:21.424181  370051 cri.go:89] found id: ""
	I0229 02:33:21.424217  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.424229  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:21.424237  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:21.424306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:21.482499  370051 cri.go:89] found id: ""
	I0229 02:33:21.482531  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.482543  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:21.482551  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:21.482621  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:21.523743  370051 cri.go:89] found id: ""
	I0229 02:33:21.523775  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.523785  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:21.523793  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:21.523868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:21.563759  370051 cri.go:89] found id: ""
	I0229 02:33:21.563789  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.563800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:21.563809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:21.563889  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:21.610162  370051 cri.go:89] found id: ""
	I0229 02:33:21.610265  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.610286  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:21.610295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:21.610378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:21.652001  370051 cri.go:89] found id: ""
	I0229 02:33:21.652028  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.652037  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:21.652047  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:21.652060  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:21.704028  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:21.704067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:21.720924  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:21.720956  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:21.798619  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:21.798645  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:21.798664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:21.888445  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:21.888506  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.437647  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:24.459963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:24.460041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:24.503906  370051 cri.go:89] found id: ""
	I0229 02:33:24.503940  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.503950  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:24.503956  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:24.504031  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:24.541893  370051 cri.go:89] found id: ""
	I0229 02:33:24.541919  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.541929  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:24.541935  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:24.541991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:24.584717  370051 cri.go:89] found id: ""
	I0229 02:33:24.584748  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.584760  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:24.584769  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:24.584836  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:24.623334  370051 cri.go:89] found id: ""
	I0229 02:33:24.623362  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.623371  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:24.623378  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:24.623447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:24.665862  370051 cri.go:89] found id: ""
	I0229 02:33:24.665890  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.665902  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:24.665911  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:24.665984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:24.705509  370051 cri.go:89] found id: ""
	I0229 02:33:24.705540  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.705551  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:24.705560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:24.705634  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:24.745348  370051 cri.go:89] found id: ""
	I0229 02:33:24.745389  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.745399  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:24.745406  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:24.745462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:24.785490  370051 cri.go:89] found id: ""
	I0229 02:33:24.785520  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.785529  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:24.785539  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:24.785553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.829556  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:24.829589  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:24.877914  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:24.877949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:24.894590  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:24.894623  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:24.972948  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:24.972981  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:24.972997  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:23.990806  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.489823  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:25.586684  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.588321  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.509932  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:29.010692  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.555364  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:27.570747  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:27.570820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:27.609771  370051 cri.go:89] found id: ""
	I0229 02:33:27.609800  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.609807  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:27.609813  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:27.609863  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:27.654316  370051 cri.go:89] found id: ""
	I0229 02:33:27.654347  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.654360  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:27.654376  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:27.654453  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:27.695089  370051 cri.go:89] found id: ""
	I0229 02:33:27.695125  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.695137  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:27.695143  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:27.695199  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:27.733846  370051 cri.go:89] found id: ""
	I0229 02:33:27.733881  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.733893  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:27.733901  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:27.733972  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:27.772906  370051 cri.go:89] found id: ""
	I0229 02:33:27.772940  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.772953  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:27.772961  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:27.773039  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:27.812266  370051 cri.go:89] found id: ""
	I0229 02:33:27.812295  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.812308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:27.812316  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:27.812387  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:27.849272  370051 cri.go:89] found id: ""
	I0229 02:33:27.849305  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.849316  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:27.849324  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:27.849393  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:27.887495  370051 cri.go:89] found id: ""
	I0229 02:33:27.887528  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.887541  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:27.887554  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:27.887569  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:27.972220  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:27.972261  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:28.020757  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:28.020797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:28.070347  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:28.070381  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:28.089905  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:28.089947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:28.183306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
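	[Each "describe nodes" attempt fails identically because nothing is serving the apiserver endpoint at localhost:8443. Two quick checks that would confirm this from the node, assuming ss and curl are available there; the port is taken from the error text, and /healthz is the apiserver's conventional health path rather than something shown in this log:

	    # Is anything listening on the apiserver port at all?
	    sudo ss -tlnp | grep ':8443' || echo "nothing listening on 8443"
	    # If it is listening, ask the (self-signed) apiserver for its health.
	    curl -ks https://localhost:8443/healthz; echo
	]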
	I0229 02:33:30.683857  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:30.701341  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:30.701443  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:30.741342  370051 cri.go:89] found id: ""
	I0229 02:33:30.741376  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.741387  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:30.741397  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:30.741475  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:30.785372  370051 cri.go:89] found id: ""
	I0229 02:33:30.785415  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.785427  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:30.785435  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:30.785506  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:30.828402  370051 cri.go:89] found id: ""
	I0229 02:33:30.828428  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.828436  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:30.828442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:30.828504  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:30.872656  370051 cri.go:89] found id: ""
	I0229 02:33:30.872684  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.872695  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:30.872704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:30.872770  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:30.918746  370051 cri.go:89] found id: ""
	I0229 02:33:30.918775  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.918786  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:30.918794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:30.918867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:30.956794  370051 cri.go:89] found id: ""
	I0229 02:33:30.956838  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.956852  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:30.956860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:30.956935  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:31.000595  370051 cri.go:89] found id: ""
	I0229 02:33:31.000618  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.000628  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:31.000637  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:31.000699  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:31.039060  370051 cri.go:89] found id: ""
	I0229 02:33:31.039089  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.039100  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:31.039111  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:31.039133  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:31.089919  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:31.089949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:31.110276  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:31.110315  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:28.990807  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.993882  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.489703  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.086658  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:32.586407  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:34.588272  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:31.509534  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.511710  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:31.235760  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:31.235791  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:31.235810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:31.323257  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:31.323322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:33.872956  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:33.887953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:33.888034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:33.927887  370051 cri.go:89] found id: ""
	I0229 02:33:33.927926  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.927938  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:33.927945  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:33.928001  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:33.967301  370051 cri.go:89] found id: ""
	I0229 02:33:33.967333  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.967345  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:33.967356  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:33.967425  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:34.009949  370051 cri.go:89] found id: ""
	I0229 02:33:34.009982  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.009992  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:34.009999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:34.010073  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:34.056197  370051 cri.go:89] found id: ""
	I0229 02:33:34.056224  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.056232  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:34.056239  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:34.056314  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:34.107089  370051 cri.go:89] found id: ""
	I0229 02:33:34.107120  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.107132  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:34.107140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:34.107206  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:34.162822  370051 cri.go:89] found id: ""
	I0229 02:33:34.162856  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.162875  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:34.162884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:34.162961  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:34.209963  370051 cri.go:89] found id: ""
	I0229 02:33:34.209993  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.210001  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:34.210008  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:34.210078  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:34.250688  370051 cri.go:89] found id: ""
	I0229 02:33:34.250726  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.250735  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:34.250754  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:34.250768  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:34.298953  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:34.298993  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:34.314067  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:34.314100  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:34.393515  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:34.393536  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:34.393551  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:34.477034  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:34.477078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:35.990175  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.490651  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.087261  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:39.588400  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:36.009933  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.508929  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.025152  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:37.040410  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:37.040491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:37.077922  370051 cri.go:89] found id: ""
	I0229 02:33:37.077953  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.077965  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:37.077973  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:37.078041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:37.137895  370051 cri.go:89] found id: ""
	I0229 02:33:37.137925  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.137938  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:37.137946  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:37.138012  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:37.199291  370051 cri.go:89] found id: ""
	I0229 02:33:37.199324  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.199336  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:37.199344  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:37.199422  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:37.242817  370051 cri.go:89] found id: ""
	I0229 02:33:37.242848  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.242857  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:37.242863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:37.242917  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:37.282171  370051 cri.go:89] found id: ""
	I0229 02:33:37.282196  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.282204  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:37.282211  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:37.282284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:37.328608  370051 cri.go:89] found id: ""
	I0229 02:33:37.328639  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.328647  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:37.328658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:37.328724  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:37.372965  370051 cri.go:89] found id: ""
	I0229 02:33:37.372996  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.373008  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:37.373016  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:37.373091  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:37.417597  370051 cri.go:89] found id: ""
	I0229 02:33:37.417630  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.417642  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:37.417655  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:37.417673  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:37.472023  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:37.472058  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:37.487931  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:37.487961  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:37.568196  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:37.568227  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:37.568245  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:37.658485  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:37.658523  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.203039  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:40.220385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:40.220477  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:40.262962  370051 cri.go:89] found id: ""
	I0229 02:33:40.262993  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.263004  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:40.263016  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:40.263086  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:40.302452  370051 cri.go:89] found id: ""
	I0229 02:33:40.302483  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.302495  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:40.302503  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:40.302560  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:40.342509  370051 cri.go:89] found id: ""
	I0229 02:33:40.342544  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.342557  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:40.342566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:40.342644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:40.385585  370051 cri.go:89] found id: ""
	I0229 02:33:40.385615  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.385629  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:40.385638  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:40.385703  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:40.426839  370051 cri.go:89] found id: ""
	I0229 02:33:40.426874  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.426887  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:40.426896  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:40.426962  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:40.467217  370051 cri.go:89] found id: ""
	I0229 02:33:40.467241  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.467251  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:40.467257  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:40.467332  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:40.513525  370051 cri.go:89] found id: ""
	I0229 02:33:40.513546  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.513553  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:40.513559  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:40.513609  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:40.554187  370051 cri.go:89] found id: ""
	I0229 02:33:40.554256  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.554269  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:40.554282  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:40.554301  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:40.636447  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:40.636477  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:40.636494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:40.716381  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:40.716423  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.761946  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:40.761982  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:40.812828  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:40.812862  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:40.492178  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.991517  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.086413  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:44.586663  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:40.510266  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.510702  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:45.013362  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:43.336139  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:43.352278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:43.352361  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:43.392555  370051 cri.go:89] found id: ""
	I0229 02:33:43.392593  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.392607  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:43.392616  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:43.392689  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:43.438169  370051 cri.go:89] found id: ""
	I0229 02:33:43.438202  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.438216  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:43.438242  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:43.438331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:43.476987  370051 cri.go:89] found id: ""
	I0229 02:33:43.477021  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.477033  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:43.477042  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:43.477109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:43.526728  370051 cri.go:89] found id: ""
	I0229 02:33:43.526758  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.526767  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:43.526778  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:43.526833  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:43.572222  370051 cri.go:89] found id: ""
	I0229 02:33:43.572260  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.572273  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:43.572282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:43.572372  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:43.618650  370051 cri.go:89] found id: ""
	I0229 02:33:43.618679  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.618691  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:43.618698  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:43.618764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:43.658069  370051 cri.go:89] found id: ""
	I0229 02:33:43.658104  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.658116  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:43.658126  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:43.658196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:43.700790  370051 cri.go:89] found id: ""
	I0229 02:33:43.700829  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.700841  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:43.700855  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:43.700874  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:43.753330  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:43.753372  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:43.770261  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:43.770294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:43.842407  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:43.842430  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:43.842447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:43.935427  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:43.935470  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:45.490296  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.490514  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.088903  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.585902  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.510105  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.511420  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:46.498694  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:46.516463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:46.516541  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:46.554731  370051 cri.go:89] found id: ""
	I0229 02:33:46.554757  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.554766  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:46.554772  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:46.554835  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:46.596851  370051 cri.go:89] found id: ""
	I0229 02:33:46.596892  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.596905  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:46.596912  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:46.596981  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:46.634978  370051 cri.go:89] found id: ""
	I0229 02:33:46.635008  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.635017  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:46.635024  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:46.635089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:46.675302  370051 cri.go:89] found id: ""
	I0229 02:33:46.675334  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.675347  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:46.675355  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:46.675423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:46.717366  370051 cri.go:89] found id: ""
	I0229 02:33:46.717402  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.717413  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:46.717421  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:46.717484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:46.756130  370051 cri.go:89] found id: ""
	I0229 02:33:46.756160  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.756169  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:46.756176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:46.756228  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:46.794283  370051 cri.go:89] found id: ""
	I0229 02:33:46.794312  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.794320  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:46.794328  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:46.794384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:46.836646  370051 cri.go:89] found id: ""
	I0229 02:33:46.836679  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.836691  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:46.836703  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:46.836721  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:46.926532  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:46.926578  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:46.981883  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:46.981915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:47.033571  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:47.033612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:47.049803  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:47.049833  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:47.123389  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:49.623827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:49.638175  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:49.638263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:49.675895  370051 cri.go:89] found id: ""
	I0229 02:33:49.675929  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.675941  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:49.675950  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:49.676009  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:49.720679  370051 cri.go:89] found id: ""
	I0229 02:33:49.720718  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.720730  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:49.720739  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:49.720808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:49.762299  370051 cri.go:89] found id: ""
	I0229 02:33:49.762329  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.762342  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:49.762350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:49.762426  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:49.809330  370051 cri.go:89] found id: ""
	I0229 02:33:49.809364  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.809376  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:49.809391  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:49.809455  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:49.859176  370051 cri.go:89] found id: ""
	I0229 02:33:49.859206  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.859218  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:49.859226  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:49.859292  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:49.914844  370051 cri.go:89] found id: ""
	I0229 02:33:49.914877  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.914890  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:49.914897  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:49.914967  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:49.969640  370051 cri.go:89] found id: ""
	I0229 02:33:49.969667  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.969676  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:49.969682  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:49.969736  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:50.010924  370051 cri.go:89] found id: ""
	I0229 02:33:50.010953  370051 logs.go:276] 0 containers: []
	W0229 02:33:50.010965  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:50.010976  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:50.011002  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:50.089462  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:50.089494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:50.132098  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:50.132129  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:50.182693  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:50.182737  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:50.198209  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:50.198256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:50.281521  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:49.991831  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.489891  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.586298  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:53.587249  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.513176  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:54.010209  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.781677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:52.795962  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:52.796055  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:52.833670  370051 cri.go:89] found id: ""
	I0229 02:33:52.833706  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.833718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:52.833728  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:52.833795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:52.889497  370051 cri.go:89] found id: ""
	I0229 02:33:52.889529  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.889539  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:52.889547  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:52.889616  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:52.952880  370051 cri.go:89] found id: ""
	I0229 02:33:52.952915  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.952927  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:52.952935  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:52.953002  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:53.008380  370051 cri.go:89] found id: ""
	I0229 02:33:53.008409  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.008420  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:53.008434  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:53.008502  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:53.047877  370051 cri.go:89] found id: ""
	I0229 02:33:53.047911  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.047922  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:53.047931  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:53.047999  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:53.086080  370051 cri.go:89] found id: ""
	I0229 02:33:53.086107  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.086118  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:53.086127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:53.086193  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:53.128334  370051 cri.go:89] found id: ""
	I0229 02:33:53.128368  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.128378  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:53.128385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:53.128457  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:53.172201  370051 cri.go:89] found id: ""
	I0229 02:33:53.172232  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.172245  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:53.172258  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:53.172275  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:53.222608  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:53.222648  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:53.239888  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:53.239918  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:53.315827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:53.315850  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:53.315864  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:53.395457  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:53.395498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
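Each cycle above shells out to crictl once per control-plane component and finds nothing (`found id: ""`, `0 containers: []`). A hedged local sketch of that listing loop follows; the real code runs the same command over SSH via ssh_runner.go, and the component names are copied from the log lines:

// list_cri.go: run `crictl ps -a --quiet --name=<component>` for each component,
// mirroring the per-component probes repeated in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out)) // one container ID per line when anything matches
		fmt.Printf("%s: %d containers\n", name, len(ids)) // the log shows 0 for every component
	}
}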
	I0229 02:33:55.943418  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:55.960562  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:55.960638  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:56.005181  370051 cri.go:89] found id: ""
	I0229 02:33:56.005210  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.005221  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:56.005229  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:56.005293  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:56.046700  370051 cri.go:89] found id: ""
	I0229 02:33:56.046731  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.046743  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:56.046750  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:56.046814  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:56.088459  370051 cri.go:89] found id: ""
	I0229 02:33:56.088486  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.088497  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:56.088505  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:56.088571  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:56.127729  370051 cri.go:89] found id: ""
	I0229 02:33:56.127762  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.127774  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:56.127783  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:56.127862  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:54.491536  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.493973  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.089188  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.586570  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.011539  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.509708  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.169980  370051 cri.go:89] found id: ""
	I0229 02:33:56.170011  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.170022  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:56.170030  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:56.170098  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:56.210650  370051 cri.go:89] found id: ""
	I0229 02:33:56.210682  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.210694  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:56.210704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:56.210771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:56.247342  370051 cri.go:89] found id: ""
	I0229 02:33:56.247380  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.247391  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:56.247400  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:56.247474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:56.286322  370051 cri.go:89] found id: ""
	I0229 02:33:56.286353  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.286364  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:56.286375  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:56.286393  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:56.335144  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:56.335184  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:56.351322  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:56.351359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:56.424251  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:56.424282  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:56.424299  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:56.506053  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:56.506082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.052805  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:59.067508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:59.067599  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:59.114213  370051 cri.go:89] found id: ""
	I0229 02:33:59.114256  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.114268  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:59.114276  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:59.114327  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:59.161087  370051 cri.go:89] found id: ""
	I0229 02:33:59.161123  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.161136  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:59.161145  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:59.161217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:59.206071  370051 cri.go:89] found id: ""
	I0229 02:33:59.206101  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.206114  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:59.206122  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:59.206196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:59.245152  370051 cri.go:89] found id: ""
	I0229 02:33:59.245179  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.245188  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:59.245194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:59.245247  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:59.286047  370051 cri.go:89] found id: ""
	I0229 02:33:59.286080  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.286092  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:59.286101  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:59.286165  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:59.323171  370051 cri.go:89] found id: ""
	I0229 02:33:59.323203  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.323214  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:59.323222  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:59.323288  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:59.364434  370051 cri.go:89] found id: ""
	I0229 02:33:59.364464  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.364477  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:59.364485  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:59.364554  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:59.405902  370051 cri.go:89] found id: ""
	I0229 02:33:59.405929  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.405938  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:59.405948  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:59.405980  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:59.481810  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:59.481841  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:59.481858  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:59.575726  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:59.575767  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.634808  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:59.634849  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:59.702513  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:59.702552  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:58.991152  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.490426  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:00.587747  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:02.594677  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.010009  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:03.509687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
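The interleaved pod_ready.go lines are three other profiles (pids 369591, 369508, 369869) polling their metrics-server pods, which stay Ready=False for this whole window. A minimal sketch of equivalent polling, assuming kubectl on PATH; minikube itself uses client-go rather than shelling out, the pod name is copied from the log, and the namespace query form, retry count, and interval are illustrative:

// pod_ready_sketch.go: poll a pod's Ready condition the way the pod_ready.go
// log lines do, giving up after a fixed number of attempts.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const pod = "metrics-server-57f55c9bc5-zghwq" // name copied from the log
	for i := 0; i < 10; i++ {
		out, _ := exec.Command("kubectl", "-n", "kube-system", "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		status := strings.TrimSpace(string(out))
		fmt.Printf("pod %q Ready=%q\n", pod, status) // "False" throughout this log window
		if status == "True" {
			return
		}
		time.Sleep(2 * time.Second)
	}
}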
	I0229 02:34:02.219660  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:02.234037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:02.234105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:02.277956  370051 cri.go:89] found id: ""
	I0229 02:34:02.277982  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.277991  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:02.277998  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:02.278071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:02.322832  370051 cri.go:89] found id: ""
	I0229 02:34:02.322856  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.322869  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:02.322878  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:02.322949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:02.368612  370051 cri.go:89] found id: ""
	I0229 02:34:02.368646  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.368659  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:02.368668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:02.368731  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:02.412436  370051 cri.go:89] found id: ""
	I0229 02:34:02.412466  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.412479  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:02.412486  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:02.412544  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:02.448682  370051 cri.go:89] found id: ""
	I0229 02:34:02.448713  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.448724  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:02.448733  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:02.448803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:02.486676  370051 cri.go:89] found id: ""
	I0229 02:34:02.486705  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.486723  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:02.486730  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:02.486795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:02.531814  370051 cri.go:89] found id: ""
	I0229 02:34:02.531841  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.531852  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:02.531860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:02.531934  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:02.569800  370051 cri.go:89] found id: ""
	I0229 02:34:02.569835  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.569845  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:02.569857  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:02.569871  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:02.623903  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:02.623937  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:02.643856  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:02.643884  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:02.735520  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:02.735544  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:02.735563  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:02.816572  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:02.816612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.371459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:05.385179  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:05.385255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:05.424653  370051 cri.go:89] found id: ""
	I0229 02:34:05.424679  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.424687  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:05.424694  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:05.424752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:05.463726  370051 cri.go:89] found id: ""
	I0229 02:34:05.463754  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.463763  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:05.463769  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:05.463823  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:05.510367  370051 cri.go:89] found id: ""
	I0229 02:34:05.510396  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.510407  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:05.510415  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:05.510480  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:05.548421  370051 cri.go:89] found id: ""
	I0229 02:34:05.548445  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.548455  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:05.548461  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:05.548527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:05.588778  370051 cri.go:89] found id: ""
	I0229 02:34:05.588801  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.588809  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:05.588815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:05.588875  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:05.638449  370051 cri.go:89] found id: ""
	I0229 02:34:05.638479  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.638490  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:05.638506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:05.638567  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:05.709921  370051 cri.go:89] found id: ""
	I0229 02:34:05.709950  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.709964  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:05.709972  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:05.710038  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:05.756965  370051 cri.go:89] found id: ""
	I0229 02:34:05.756992  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.757000  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:05.757010  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:05.757025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:05.826878  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:05.826904  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:05.826921  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:05.909205  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:05.909256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.954537  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:05.954594  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:06.004157  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:06.004203  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
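Between the crictl probes, each gather pass also pulls the last 400 journal lines for the kubelet and crio units. A sketch of those two collection steps (unit names and line count are from the log; local execution is an assumption, since the real runner executes these over SSH):

// gather_journals.go: collect the tail of the kubelet and crio unit journals,
// as in the "Gathering logs for kubelet/CRI-O" steps above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"kubelet", "crio"} {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		if err != nil {
			fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("=== %s journal: %d bytes collected ===\n", unit, len(out))
	}
}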
	I0229 02:34:03.989381  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.990323  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.491379  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.086296  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:07.586477  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.511758  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.009545  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.010247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.522975  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:08.539247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:08.539326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:08.579776  370051 cri.go:89] found id: ""
	I0229 02:34:08.579806  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.579817  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:08.579826  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:08.579890  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:08.628415  370051 cri.go:89] found id: ""
	I0229 02:34:08.628444  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.628456  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:08.628468  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:08.628534  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:08.690499  370051 cri.go:89] found id: ""
	I0229 02:34:08.690530  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.690540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:08.690547  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:08.690613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:08.739755  370051 cri.go:89] found id: ""
	I0229 02:34:08.739788  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.739801  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:08.739809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:08.739906  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:08.781693  370051 cri.go:89] found id: ""
	I0229 02:34:08.781721  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.781733  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:08.781742  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:08.781808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:08.818605  370051 cri.go:89] found id: ""
	I0229 02:34:08.818637  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.818645  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:08.818652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:08.818713  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:08.861533  370051 cri.go:89] found id: ""
	I0229 02:34:08.861559  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.861569  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:08.861578  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:08.861658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:08.902727  370051 cri.go:89] found id: ""
	I0229 02:34:08.902758  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.902771  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:08.902784  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:08.902801  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:08.948527  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:08.948567  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:08.999883  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:08.999916  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:09.015438  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:09.015467  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:09.087965  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:09.087994  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:09.088010  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:10.990135  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.991074  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.085517  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.086653  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:14.086817  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.510247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:15.010412  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:11.671443  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:11.702197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:11.702322  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:11.755104  370051 cri.go:89] found id: ""
	I0229 02:34:11.755136  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.755147  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:11.755153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:11.755204  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:11.794190  370051 cri.go:89] found id: ""
	I0229 02:34:11.794218  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.794239  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:11.794247  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:11.794310  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:11.837330  370051 cri.go:89] found id: ""
	I0229 02:34:11.837360  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.837372  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:11.837380  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:11.837447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:11.876682  370051 cri.go:89] found id: ""
	I0229 02:34:11.876716  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.876726  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:11.876734  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:11.876805  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:11.922172  370051 cri.go:89] found id: ""
	I0229 02:34:11.922239  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.922262  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:11.922271  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:11.922341  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:11.962218  370051 cri.go:89] found id: ""
	I0229 02:34:11.962270  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.962283  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:11.962291  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:11.962375  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:12.002075  370051 cri.go:89] found id: ""
	I0229 02:34:12.002101  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.002110  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:12.002117  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:12.002169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:12.043337  370051 cri.go:89] found id: ""
	I0229 02:34:12.043378  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.043399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:12.043412  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:12.043428  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:12.094458  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:12.094491  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:12.112374  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:12.112401  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:12.193665  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:12.193689  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:12.193717  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:12.282510  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:12.282553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:14.828451  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:14.843626  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:14.843690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:14.884181  370051 cri.go:89] found id: ""
	I0229 02:34:14.884214  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.884226  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:14.884235  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:14.884302  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:14.926312  370051 cri.go:89] found id: ""
	I0229 02:34:14.926347  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.926361  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:14.926369  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:14.926436  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:14.969147  370051 cri.go:89] found id: ""
	I0229 02:34:14.969182  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.969195  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:14.969207  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:14.969277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:15.013000  370051 cri.go:89] found id: ""
	I0229 02:34:15.013045  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.013055  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:15.013064  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:15.013120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:15.055811  370051 cri.go:89] found id: ""
	I0229 02:34:15.055849  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.055861  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:15.055869  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:15.055939  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:15.100736  370051 cri.go:89] found id: ""
	I0229 02:34:15.100768  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.100780  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:15.100789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:15.100867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:15.140115  370051 cri.go:89] found id: ""
	I0229 02:34:15.140151  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.140164  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:15.140172  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:15.140239  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:15.183545  370051 cri.go:89] found id: ""
	I0229 02:34:15.183576  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.183588  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:15.183602  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:15.183621  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:15.258646  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:15.258676  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:15.258693  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:15.347035  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:15.347082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:15.407148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:15.407178  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:15.466695  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:15.466741  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:15.490797  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.990851  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:16.585993  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:18.587604  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.509114  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:19.509856  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.989102  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:18.005052  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:18.005126  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:18.044687  370051 cri.go:89] found id: ""
	I0229 02:34:18.044714  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.044725  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:18.044739  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:18.044815  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:18.085904  370051 cri.go:89] found id: ""
	I0229 02:34:18.085934  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.085944  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:18.085952  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:18.086017  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:18.129958  370051 cri.go:89] found id: ""
	I0229 02:34:18.129985  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.129994  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:18.129999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:18.130052  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:18.166942  370051 cri.go:89] found id: ""
	I0229 02:34:18.166979  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.166991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:18.167000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:18.167056  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:18.205297  370051 cri.go:89] found id: ""
	I0229 02:34:18.205324  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.205331  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:18.205337  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:18.205410  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:18.246415  370051 cri.go:89] found id: ""
	I0229 02:34:18.246448  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.246461  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:18.246469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:18.246527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:18.285534  370051 cri.go:89] found id: ""
	I0229 02:34:18.285573  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.285585  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:18.285600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:18.285662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:18.327624  370051 cri.go:89] found id: ""
	I0229 02:34:18.327651  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.327659  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:18.327670  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:18.327684  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:18.383307  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:18.383351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:18.408127  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:18.408162  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:18.502036  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:18.502070  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:18.502093  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:18.582289  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:18.582340  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:20.490582  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:22.990210  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.086446  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:23.586600  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.511411  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:24.009976  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.135649  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:21.149411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:21.149498  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:21.198246  370051 cri.go:89] found id: ""
	I0229 02:34:21.198286  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.198298  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:21.198306  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:21.198378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:21.240168  370051 cri.go:89] found id: ""
	I0229 02:34:21.240195  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.240203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:21.240209  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:21.240275  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:21.281243  370051 cri.go:89] found id: ""
	I0229 02:34:21.281277  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.281288  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:21.281296  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:21.281359  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:21.321573  370051 cri.go:89] found id: ""
	I0229 02:34:21.321609  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.321621  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:21.321629  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:21.321693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:21.375156  370051 cri.go:89] found id: ""
	I0229 02:34:21.375212  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.375226  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:21.375234  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:21.375308  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:21.430450  370051 cri.go:89] found id: ""
	I0229 02:34:21.430487  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.430499  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:21.430508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:21.430576  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:21.475095  370051 cri.go:89] found id: ""
	I0229 02:34:21.475124  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.475135  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:21.475144  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:21.475215  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:21.517378  370051 cri.go:89] found id: ""
	I0229 02:34:21.517403  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.517412  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:21.517424  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:21.517444  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:21.534103  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:21.534147  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:21.608375  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:21.608400  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:21.608412  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:21.691912  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:21.691950  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:21.744366  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:21.744406  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.295384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:24.309456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:24.309539  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:24.370125  370051 cri.go:89] found id: ""
	I0229 02:34:24.370156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.370167  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:24.370175  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:24.370256  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:24.439458  370051 cri.go:89] found id: ""
	I0229 02:34:24.439487  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.439499  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:24.439506  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:24.439639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:24.478070  370051 cri.go:89] found id: ""
	I0229 02:34:24.478105  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.478119  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:24.478127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:24.478194  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:24.517128  370051 cri.go:89] found id: ""
	I0229 02:34:24.517156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.517168  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:24.517176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:24.517243  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:24.555502  370051 cri.go:89] found id: ""
	I0229 02:34:24.555537  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.555549  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:24.555557  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:24.555625  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:24.601261  370051 cri.go:89] found id: ""
	I0229 02:34:24.601295  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.601307  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:24.601315  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:24.601389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:24.639110  370051 cri.go:89] found id: ""
	I0229 02:34:24.639141  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.639153  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:24.639161  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:24.639224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:24.681448  370051 cri.go:89] found id: ""
	I0229 02:34:24.681478  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.681487  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:24.681498  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:24.681517  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.730735  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:24.730775  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:24.746996  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:24.747031  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:24.827581  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:24.827608  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:24.827628  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:24.909551  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:24.909596  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:24.990581  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.489787  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:25.586672  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.586999  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:26.509819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:29.009014  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
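	The interleaved pod_ready lines come from three other test processes (369591, 369508, 369869), each polling its profile's metrics-server pod every ~2.5s; a pod only counts as ready once its Ready condition reports "True". A sketch of an equivalent poll using kubectl's jsonpath output rather than minikube's internal client (the pod name and timeout here are illustrative):

// ready.go — poll a pod's Ready condition until it is "True" or the
// deadline passes, at roughly the cadence visible in the timestamps above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(name, namespace string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", name, "-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := podReady("metrics-server-57f55c9bc5-zghwq", "kube-system")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}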
	I0229 02:34:27.455967  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:27.477411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:27.477487  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:27.523163  370051 cri.go:89] found id: ""
	I0229 02:34:27.523189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.523198  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:27.523203  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:27.523258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:27.562298  370051 cri.go:89] found id: ""
	I0229 02:34:27.562330  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.562343  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:27.562350  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:27.562420  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:27.603506  370051 cri.go:89] found id: ""
	I0229 02:34:27.603532  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.603540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:27.603554  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:27.603619  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:27.646971  370051 cri.go:89] found id: ""
	I0229 02:34:27.647002  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.647014  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:27.647031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:27.647109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:27.685124  370051 cri.go:89] found id: ""
	I0229 02:34:27.685149  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.685160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:27.685169  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:27.685235  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:27.726976  370051 cri.go:89] found id: ""
	I0229 02:34:27.727007  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.727018  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:27.727026  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:27.727089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:27.767159  370051 cri.go:89] found id: ""
	I0229 02:34:27.767189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.767197  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:27.767204  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:27.767272  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:27.810377  370051 cri.go:89] found id: ""
	I0229 02:34:27.810411  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.810420  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:27.810431  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:27.810447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:27.858094  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:27.858136  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:27.874407  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:27.874440  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:27.953065  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:27.953092  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:27.953108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:28.042244  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:28.042278  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:30.588227  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:30.604954  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:30.605037  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:30.642069  370051 cri.go:89] found id: ""
	I0229 02:34:30.642100  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.642108  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:30.642119  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:30.642187  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:30.686212  370051 cri.go:89] found id: ""
	I0229 02:34:30.686264  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.686277  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:30.686285  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:30.686364  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:30.726668  370051 cri.go:89] found id: ""
	I0229 02:34:30.726702  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.726715  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:30.726723  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:30.726788  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:30.766850  370051 cri.go:89] found id: ""
	I0229 02:34:30.766883  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.766895  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:30.766904  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:30.766979  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:30.808972  370051 cri.go:89] found id: ""
	I0229 02:34:30.809002  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.809015  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:30.809023  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:30.809093  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:30.851992  370051 cri.go:89] found id: ""
	I0229 02:34:30.852016  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.852025  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:30.852031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:30.852096  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:30.891100  370051 cri.go:89] found id: ""
	I0229 02:34:30.891132  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.891144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:30.891157  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:30.891227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:30.931740  370051 cri.go:89] found id: ""
	I0229 02:34:30.931768  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.931777  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:30.931787  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:30.931808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:31.010896  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:31.010919  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:31.010936  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:31.094626  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:31.094662  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:29.490211  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.490659  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:30.086898  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:32.587485  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.010003  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:33.510267  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.150765  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:31.150804  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:31.202932  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:31.202976  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
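	Each retry cycle opens with the same gate: pgrep -xnf kube-apiserver.*minikube.* (-f matches the pattern against the full command line, -x requires it to match exactly, -n picks the newest match). pgrep exits 0 when a process matches and 1 when none does, so the check is cheap to reproduce (a sketch, not the harness's code):

// apiserver_up.go — run the same pgrep gate and interpret its exit code:
// 0 means a matching kube-apiserver process exists, 1 means none does.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	err := cmd.Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kube-apiserver process found")
	case errors.As(err, &ee) && ee.ExitCode() == 1:
		fmt.Println("no kube-apiserver process running")
	default:
		fmt.Println("pgrep failed:", err)
	}
}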
	I0229 02:34:33.723355  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:33.738651  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:33.738753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:33.778255  370051 cri.go:89] found id: ""
	I0229 02:34:33.778287  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.778299  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:33.778307  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:33.778384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:33.818360  370051 cri.go:89] found id: ""
	I0229 02:34:33.818396  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.818406  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:33.818412  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:33.818564  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:33.866781  370051 cri.go:89] found id: ""
	I0229 02:34:33.866814  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.866824  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:33.866831  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:33.866891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:33.910013  370051 cri.go:89] found id: ""
	I0229 02:34:33.910051  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.910063  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:33.910072  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:33.910146  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:33.956068  370051 cri.go:89] found id: ""
	I0229 02:34:33.956098  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.956106  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:33.956113  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:33.956170  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:34.004997  370051 cri.go:89] found id: ""
	I0229 02:34:34.005027  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.005038  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:34.005047  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:34.005113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:34.059266  370051 cri.go:89] found id: ""
	I0229 02:34:34.059293  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.059302  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:34.059307  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:34.059363  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:34.105601  370051 cri.go:89] found id: ""
	I0229 02:34:34.105631  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.105643  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:34.105654  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:34.105669  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:34.208723  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:34.208764  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:34.262105  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:34.262137  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:34.314528  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:34.314571  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:34.332441  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:34.332477  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:34.406303  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:33.990257  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.490844  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:35.085482  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:37.086532  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:39.087022  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.015574  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:38.510064  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.906814  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:36.922297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:36.922377  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:36.967550  370051 cri.go:89] found id: ""
	I0229 02:34:36.967578  370051 logs.go:276] 0 containers: []
	W0229 02:34:36.967589  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:36.967599  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:36.967662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:37.007589  370051 cri.go:89] found id: ""
	I0229 02:34:37.007614  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.007624  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:37.007632  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:37.007706  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:37.048230  370051 cri.go:89] found id: ""
	I0229 02:34:37.048260  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.048273  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:37.048281  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:37.048354  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:37.089329  370051 cri.go:89] found id: ""
	I0229 02:34:37.089355  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.089365  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:37.089373  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:37.089441  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:37.144654  370051 cri.go:89] found id: ""
	I0229 02:34:37.144687  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.144699  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:37.144708  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:37.144778  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:37.203822  370051 cri.go:89] found id: ""
	I0229 02:34:37.203857  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.203868  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:37.203876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:37.203948  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:37.250369  370051 cri.go:89] found id: ""
	I0229 02:34:37.250398  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.250410  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:37.250417  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:37.250490  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:37.290924  370051 cri.go:89] found id: ""
	I0229 02:34:37.290957  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.290969  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:37.290981  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:37.290995  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:37.343878  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:37.343920  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:37.359307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:37.359336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:37.435264  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:37.435292  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:37.435309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:37.518274  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:37.518309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
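	The "container status" step guards against a missing crictl: sudo `which crictl || echo crictl` ps -a substitutes the resolved path when which succeeds and the bare name otherwise, and || sudo docker ps -a falls back to the Docker CLI if the crictl invocation fails. The same fallback chain expressed natively (a sketch; the binary names are exactly those in the log):

// status.go — prefer crictl if it resolves on PATH, otherwise fall back
// to docker, mirroring `sudo $(which crictl || echo crictl) ps -a ||
// sudo docker ps -a` from the log line above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tool := "crictl"
	if _, err := exec.LookPath("crictl"); err != nil {
		fmt.Fprintln(os.Stderr, "crictl not on PATH, trying docker")
		tool = "docker"
	}
	cmd := exec.Command("sudo", tool, "ps", "-a")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil && tool == "crictl" {
		// Same last-resort fallback the harness uses.
		fallback := exec.Command("sudo", "docker", "ps", "-a")
		fallback.Stdout = os.Stdout
		fallback.Stderr = os.Stderr
		_ = fallback.Run()
	}
}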
	I0229 02:34:40.062232  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:40.079883  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:40.079957  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:40.123826  370051 cri.go:89] found id: ""
	I0229 02:34:40.123856  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.123866  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:40.123874  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:40.123943  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:40.190273  370051 cri.go:89] found id: ""
	I0229 02:34:40.190321  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.190332  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:40.190338  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:40.190395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:40.232921  370051 cri.go:89] found id: ""
	I0229 02:34:40.232949  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.232961  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:40.232968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:40.233034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:40.273490  370051 cri.go:89] found id: ""
	I0229 02:34:40.273517  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.273526  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:40.273538  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:40.273594  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:40.317121  370051 cri.go:89] found id: ""
	I0229 02:34:40.317152  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.317163  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:40.317171  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:40.317230  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:40.363347  370051 cri.go:89] found id: ""
	I0229 02:34:40.363380  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.363389  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:40.363396  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:40.363459  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:40.407187  370051 cri.go:89] found id: ""
	I0229 02:34:40.407213  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.407222  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:40.407231  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:40.407282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:40.447185  370051 cri.go:89] found id: ""
	I0229 02:34:40.447218  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.447229  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:40.447242  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:40.447258  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:40.496998  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:40.497029  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:40.512520  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:40.512549  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:40.589150  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:40.589173  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:40.589190  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:40.677054  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:40.677096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:38.991307  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:40.992688  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.490195  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.585962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.586942  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.009837  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.510138  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.222265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:43.236567  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:43.236629  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:43.282917  370051 cri.go:89] found id: ""
	I0229 02:34:43.282959  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.282976  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:43.282982  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:43.283049  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:43.329273  370051 cri.go:89] found id: ""
	I0229 02:34:43.329302  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.329313  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:43.329321  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:43.329386  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:43.366696  370051 cri.go:89] found id: ""
	I0229 02:34:43.366723  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.366732  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:43.366739  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:43.366800  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:43.405793  370051 cri.go:89] found id: ""
	I0229 02:34:43.405820  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.405828  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:43.405834  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:43.405888  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:43.442870  370051 cri.go:89] found id: ""
	I0229 02:34:43.442898  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.442906  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:43.442912  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:43.442964  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:43.484581  370051 cri.go:89] found id: ""
	I0229 02:34:43.484615  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.484626  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:43.484635  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:43.484702  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:43.530931  370051 cri.go:89] found id: ""
	I0229 02:34:43.530954  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.530963  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:43.530968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:43.531024  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:43.572810  370051 cri.go:89] found id: ""
	I0229 02:34:43.572838  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.572850  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:43.572867  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:43.572883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:43.622815  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:43.622854  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:43.637972  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:43.638012  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:43.713704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:43.713728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:43.713746  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:43.797178  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:43.797220  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:45.490670  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:47.989828  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:45.587464  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.090384  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.009454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.010403  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.347159  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:46.361601  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:46.361682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:46.399751  370051 cri.go:89] found id: ""
	I0229 02:34:46.399784  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.399795  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:46.399804  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:46.399870  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:46.445367  370051 cri.go:89] found id: ""
	I0229 02:34:46.445398  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.445407  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:46.445413  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:46.445486  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:46.490323  370051 cri.go:89] found id: ""
	I0229 02:34:46.490363  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.490385  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:46.490393  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:46.490473  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:46.531406  370051 cri.go:89] found id: ""
	I0229 02:34:46.531441  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.531450  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:46.531456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:46.531507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:46.572759  370051 cri.go:89] found id: ""
	I0229 02:34:46.572787  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.572795  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:46.572804  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:46.572908  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:46.613055  370051 cri.go:89] found id: ""
	I0229 02:34:46.613083  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.613093  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:46.613099  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:46.613153  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:46.657504  370051 cri.go:89] found id: ""
	I0229 02:34:46.657536  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.657544  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:46.657550  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:46.657605  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:46.698008  370051 cri.go:89] found id: ""
	I0229 02:34:46.698057  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.698068  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:46.698080  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:46.698097  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:46.746648  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:46.746682  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:46.761190  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:46.761219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:46.843379  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:46.843403  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:46.843415  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:46.933493  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:46.933546  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.491837  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:49.508647  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:49.508717  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:49.550752  370051 cri.go:89] found id: ""
	I0229 02:34:49.550788  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.550800  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:49.550809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:49.550883  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:49.597623  370051 cri.go:89] found id: ""
	I0229 02:34:49.597663  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.597675  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:49.597683  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:49.597764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:49.635207  370051 cri.go:89] found id: ""
	I0229 02:34:49.635230  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.635238  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:49.635282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:49.635336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:49.674664  370051 cri.go:89] found id: ""
	I0229 02:34:49.674696  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.674708  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:49.674716  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:49.674777  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:49.715391  370051 cri.go:89] found id: ""
	I0229 02:34:49.715420  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.715433  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:49.715442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:49.715497  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:49.753318  370051 cri.go:89] found id: ""
	I0229 02:34:49.753352  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.753373  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:49.753382  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:49.753451  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:49.791342  370051 cri.go:89] found id: ""
	I0229 02:34:49.791369  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.791377  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:49.791384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:49.791456  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:49.838148  370051 cri.go:89] found id: ""
	I0229 02:34:49.838181  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.838191  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:49.838204  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:49.838244  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:49.891532  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:49.891568  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:49.917625  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:49.917664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:50.019436  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:50.019457  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:50.019472  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:50.108302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:50.108349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.991272  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.491139  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.586940  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.509504  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:53.010818  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.654561  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:52.668331  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:52.668402  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:52.718431  370051 cri.go:89] found id: ""
	I0229 02:34:52.718471  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.718484  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:52.718493  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:52.718551  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:52.757913  370051 cri.go:89] found id: ""
	I0229 02:34:52.757946  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.757957  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:52.757965  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:52.758035  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:52.796792  370051 cri.go:89] found id: ""
	I0229 02:34:52.796821  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.796833  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:52.796842  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:52.796913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:52.832157  370051 cri.go:89] found id: ""
	I0229 02:34:52.832187  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.832196  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:52.832203  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:52.832264  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:52.879170  370051 cri.go:89] found id: ""
	I0229 02:34:52.879197  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.879206  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:52.879212  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:52.879265  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:52.924219  370051 cri.go:89] found id: ""
	I0229 02:34:52.924249  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.924258  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:52.924264  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:52.924318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:52.980422  370051 cri.go:89] found id: ""
	I0229 02:34:52.980450  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.980457  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:52.980463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:52.980525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:53.026393  370051 cri.go:89] found id: ""
	I0229 02:34:53.026418  370051 logs.go:276] 0 containers: []
	W0229 02:34:53.026426  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:53.026436  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:53.026453  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:53.075135  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:53.075174  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:53.092197  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:53.092223  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:53.164397  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:53.164423  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:53.164439  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:53.250310  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:53.250366  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:55.792993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:55.807152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:55.807229  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:55.867791  370051 cri.go:89] found id: ""
	I0229 02:34:55.867821  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.867830  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:55.867847  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:55.867925  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:55.922960  370051 cri.go:89] found id: ""
	I0229 02:34:55.922989  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.923001  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:55.923009  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:55.923076  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:55.972510  370051 cri.go:89] found id: ""
	I0229 02:34:55.972541  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.972552  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:55.972560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:55.972632  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:56.011948  370051 cri.go:89] found id: ""
	I0229 02:34:56.011980  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.011990  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:56.011999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:56.012077  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:56.052624  370051 cri.go:89] found id: ""
	I0229 02:34:56.052653  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.052662  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:56.052668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:56.052722  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:56.089075  370051 cri.go:89] found id: ""
	I0229 02:34:56.089100  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.089108  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:56.089114  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:56.089180  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:56.130369  370051 cri.go:89] found id: ""
	I0229 02:34:56.130403  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.130416  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:56.130424  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:56.130496  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:54.989569  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.991424  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.085652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.585291  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.586439  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.509734  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.510165  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.511749  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.177812  370051 cri.go:89] found id: ""
	I0229 02:34:56.177843  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.177854  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:56.177875  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:56.177894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:56.224294  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:56.224336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:56.275874  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:56.275909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:56.291172  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:56.291202  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:56.364839  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:56.364870  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:56.364888  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:58.950871  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:58.966327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:58.966389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:59.005914  370051 cri.go:89] found id: ""
	I0229 02:34:59.005952  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.005968  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:59.005976  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:59.006045  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:59.043962  370051 cri.go:89] found id: ""
	I0229 02:34:59.043993  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.044005  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:59.044013  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:59.044167  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:59.089398  370051 cri.go:89] found id: ""
	I0229 02:34:59.089426  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.089434  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:59.089440  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:59.089491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:59.130786  370051 cri.go:89] found id: ""
	I0229 02:34:59.130815  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.130824  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:59.130830  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:59.130909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:59.174807  370051 cri.go:89] found id: ""
	I0229 02:34:59.174836  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.174848  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:59.174855  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:59.174929  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:59.217745  370051 cri.go:89] found id: ""
	I0229 02:34:59.217792  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.217800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:59.217806  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:59.217858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:59.260906  370051 cri.go:89] found id: ""
	I0229 02:34:59.260939  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.260950  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:59.260957  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:59.261025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:59.299114  370051 cri.go:89] found id: ""
	I0229 02:34:59.299140  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.299150  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:59.299161  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:59.299173  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:59.349630  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:59.349672  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:59.365679  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:59.365710  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:59.438234  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:59.438261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:59.438280  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:59.524185  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:59.524219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
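	Every "describe nodes" attempt above fails with a refused connection to localhost:8443, meaning nothing is listening on the apiserver port yet. A minimal sketch for confirming that from the node; curl is an assumption here, since the guest image may only ship busybox tools:
	
		# probe the apiserver health endpoint; "connection refused" matches the
		# empty pgrep/crictl results in the surrounding log
		curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"
		# the process check minikube itself runs at the top of each gather cycle
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	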
	I0229 02:34:58.991975  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:01.489719  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:03.490315  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.087731  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.585197  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.008802  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.509210  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.068320  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:02.082910  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:02.082988  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:02.122095  370051 cri.go:89] found id: ""
	I0229 02:35:02.122132  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.122145  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:02.122153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:02.122245  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:02.160982  370051 cri.go:89] found id: ""
	I0229 02:35:02.161013  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.161029  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:02.161043  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:02.161108  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:02.200603  370051 cri.go:89] found id: ""
	I0229 02:35:02.200637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.200650  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:02.200658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:02.200746  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:02.243100  370051 cri.go:89] found id: ""
	I0229 02:35:02.243126  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.243134  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:02.243140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:02.243207  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:02.282758  370051 cri.go:89] found id: ""
	I0229 02:35:02.282793  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.282806  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:02.282815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:02.282884  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:02.324402  370051 cri.go:89] found id: ""
	I0229 02:35:02.324434  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.324444  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:02.324455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:02.324520  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:02.368608  370051 cri.go:89] found id: ""
	I0229 02:35:02.368637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.368650  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:02.368658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:02.368726  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:02.411449  370051 cri.go:89] found id: ""
	I0229 02:35:02.411484  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.411497  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:02.411509  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:02.411526  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:02.427942  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:02.427974  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:02.498848  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:02.498884  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:02.498902  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:02.585701  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:02.585749  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:02.642055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:02.642096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.201769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:05.215944  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:05.216020  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:05.254080  370051 cri.go:89] found id: ""
	I0229 02:35:05.254107  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.254121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:05.254128  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:05.254179  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:05.296990  370051 cri.go:89] found id: ""
	I0229 02:35:05.297022  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.297034  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:05.297042  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:05.297111  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:05.336241  370051 cri.go:89] found id: ""
	I0229 02:35:05.336275  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.336290  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:05.336299  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:05.336395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:05.377620  370051 cri.go:89] found id: ""
	I0229 02:35:05.377649  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.377658  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:05.377664  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:05.377712  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:05.416275  370051 cri.go:89] found id: ""
	I0229 02:35:05.416303  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.416311  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:05.416318  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:05.416373  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:05.455375  370051 cri.go:89] found id: ""
	I0229 02:35:05.455412  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.455426  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:05.455436  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:05.455507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:05.495862  370051 cri.go:89] found id: ""
	I0229 02:35:05.495887  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.495897  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:05.495905  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:05.495969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:05.541218  370051 cri.go:89] found id: ""
	I0229 02:35:05.541247  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.541260  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:05.541273  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:05.541288  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:05.629982  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:05.630023  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:05.719026  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:05.719066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.785318  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:05.785359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:05.801181  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:05.801214  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:05.871333  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:05.490857  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:07.991044  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.587458  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:09.086313  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.510265  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.510391  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.371982  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:08.386451  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:08.386514  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:08.430045  370051 cri.go:89] found id: ""
	I0229 02:35:08.430077  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.430090  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:08.430099  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:08.430169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:08.470547  370051 cri.go:89] found id: ""
	I0229 02:35:08.470583  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.470596  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:08.470604  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:08.470671  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:08.512637  370051 cri.go:89] found id: ""
	I0229 02:35:08.512676  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.512687  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:08.512695  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:08.512759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:08.556228  370051 cri.go:89] found id: ""
	I0229 02:35:08.556263  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.556271  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:08.556277  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:08.556335  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:08.613838  370051 cri.go:89] found id: ""
	I0229 02:35:08.613868  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.613878  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:08.613884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:08.613940  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:08.686408  370051 cri.go:89] found id: ""
	I0229 02:35:08.686442  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.686454  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:08.686462  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:08.686519  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:08.725665  370051 cri.go:89] found id: ""
	I0229 02:35:08.725697  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.725710  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:08.725719  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:08.725776  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:08.765639  370051 cri.go:89] found id: ""
	I0229 02:35:08.765666  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.765674  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:08.765684  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:08.765695  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:08.813097  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:08.813135  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:08.828880  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:08.828909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:08.903237  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:08.903261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:08.903281  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:08.991710  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:08.991745  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:10.491022  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:12.491159  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.086828  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.586274  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.009650  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.011571  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.536724  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:11.551614  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:11.551690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:11.593078  370051 cri.go:89] found id: ""
	I0229 02:35:11.593110  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.593121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:11.593129  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:11.593185  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:11.645696  370051 cri.go:89] found id: ""
	I0229 02:35:11.645729  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.645742  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:11.645751  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:11.645820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:11.691181  370051 cri.go:89] found id: ""
	I0229 02:35:11.691213  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.691226  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:11.691245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:11.691318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:11.745906  370051 cri.go:89] found id: ""
	I0229 02:35:11.745933  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.745946  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:11.745953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:11.746019  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:11.784895  370051 cri.go:89] found id: ""
	I0229 02:35:11.784927  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.784940  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:11.784949  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:11.785025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:11.825341  370051 cri.go:89] found id: ""
	I0229 02:35:11.825372  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.825384  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:11.825392  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:11.825464  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:11.862454  370051 cri.go:89] found id: ""
	I0229 02:35:11.862492  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.862505  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:11.862523  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:11.862604  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:11.908424  370051 cri.go:89] found id: ""
	I0229 02:35:11.908450  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.908459  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:11.908469  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:11.908487  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:11.956274  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:11.956313  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:11.972363  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:11.972397  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:12.052030  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:12.052057  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:12.052078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:12.138388  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:12.138431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.691474  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:14.724652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:14.724739  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:14.765210  370051 cri.go:89] found id: ""
	I0229 02:35:14.765237  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.765246  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:14.765253  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:14.765306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:14.808226  370051 cri.go:89] found id: ""
	I0229 02:35:14.808258  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.808270  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:14.808287  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:14.808357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:14.847999  370051 cri.go:89] found id: ""
	I0229 02:35:14.848030  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.848041  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:14.848049  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:14.848123  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:14.887221  370051 cri.go:89] found id: ""
	I0229 02:35:14.887248  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.887256  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:14.887263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:14.887339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:14.929905  370051 cri.go:89] found id: ""
	I0229 02:35:14.929933  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.929950  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:14.929956  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:14.930011  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:14.969697  370051 cri.go:89] found id: ""
	I0229 02:35:14.969739  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.969761  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:14.969770  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:14.969837  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:15.013387  370051 cri.go:89] found id: ""
	I0229 02:35:15.013418  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.013429  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:15.013437  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:15.013493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:15.058199  370051 cri.go:89] found id: ""
	I0229 02:35:15.058240  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.058253  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:15.058270  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:15.058287  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:15.110165  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:15.110213  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:15.127417  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:15.127452  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:15.203330  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:15.203370  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:15.203405  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:15.283455  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:15.283501  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.991352  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.490127  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.586556  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:18.085962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.509530  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.512518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:20.009873  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.829187  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:17.844678  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:17.844759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:17.885549  370051 cri.go:89] found id: ""
	I0229 02:35:17.885581  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.885594  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:17.885601  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:17.885670  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:17.925652  370051 cri.go:89] found id: ""
	I0229 02:35:17.925679  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.925691  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:17.925699  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:17.925766  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:17.963172  370051 cri.go:89] found id: ""
	I0229 02:35:17.963203  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.963215  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:17.963224  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:17.963282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:18.003528  370051 cri.go:89] found id: ""
	I0229 02:35:18.003560  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.003572  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:18.003579  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:18.003644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:18.046494  370051 cri.go:89] found id: ""
	I0229 02:35:18.046526  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.046537  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:18.046545  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:18.046613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:18.084963  370051 cri.go:89] found id: ""
	I0229 02:35:18.084993  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.085004  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:18.085013  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:18.085074  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:18.125521  370051 cri.go:89] found id: ""
	I0229 02:35:18.125547  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.125556  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:18.125563  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:18.125623  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:18.169963  370051 cri.go:89] found id: ""
	I0229 02:35:18.169995  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.170006  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:18.170020  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:18.170035  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:18.225414  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:18.225460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:18.242069  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:18.242108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:18.312704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:18.312728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:18.312742  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:18.397206  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:18.397249  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:20.968000  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:20.983115  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:20.983196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:21.025710  370051 cri.go:89] found id: ""
	I0229 02:35:21.025735  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.025743  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:21.025749  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:21.025812  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:21.065825  370051 cri.go:89] found id: ""
	I0229 02:35:21.065854  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.065862  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:21.065868  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:21.065928  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:21.104738  370051 cri.go:89] found id: ""
	I0229 02:35:21.104770  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.104782  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:21.104790  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:21.104871  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:19.990622  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491026  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491059  369591 pod_ready.go:81] duration metric: took 4m0.008454624s waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:22.491069  369591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:35:22.491077  369591 pod_ready.go:38] duration metric: took 4m5.576507129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:22.491094  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:35:22.491124  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:22.491174  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:22.562384  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:22.562412  369591 cri.go:89] found id: ""
	I0229 02:35:22.562422  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:22.562487  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.567997  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:22.568073  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:22.632786  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:22.632811  369591 cri.go:89] found id: ""
	I0229 02:35:22.632822  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:22.632887  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.637899  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:22.637975  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:22.681988  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:22.682014  369591 cri.go:89] found id: ""
	I0229 02:35:22.682024  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:22.682084  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.687515  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:22.687606  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:22.732907  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:22.732931  369591 cri.go:89] found id: ""
	I0229 02:35:22.732939  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:22.732995  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.737695  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:22.737758  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:22.779316  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:22.779341  369591 cri.go:89] found id: ""
	I0229 02:35:22.779349  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:22.779413  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.786533  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:22.786617  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:22.834391  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:22.834420  369591 cri.go:89] found id: ""
	I0229 02:35:22.834430  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:22.834500  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.839386  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:22.839458  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:22.881275  369591 cri.go:89] found id: ""
	I0229 02:35:22.881304  369591 logs.go:276] 0 containers: []
	W0229 02:35:22.881317  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:22.881326  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:22.881404  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:22.932822  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:22.932846  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:22.932850  369591 cri.go:89] found id: ""
	I0229 02:35:22.932858  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:22.932913  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.938541  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.943263  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:22.943288  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:22.994089  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:22.994122  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:23.051780  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:23.051821  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:23.099220  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:23.099251  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:23.157383  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:23.157429  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:23.206125  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:23.206180  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:23.261950  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:23.261982  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:23.324394  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:23.324427  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:23.400608  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:23.400648  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
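	Process 369591, by contrast, does find running containers, and its gather step resolves one ID per component before tailing that container's log. A minimal sketch of the two-step pattern; the head -n1 selection is an assumption for the case where several IDs match:
	
		# resolve a container ID by component name, then tail its log, mirroring
		# the "crictl ps -a --quiet --name=..." / "crictl logs --tail 400" pairs above
		ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
		[ -n "$ID" ] && sudo /usr/bin/crictl logs --tail 400 "$ID"
	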
	I0229 02:35:20.589079  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:23.088469  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.510074  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:24.002388  369869 pod_ready.go:81] duration metric: took 4m0.000212386s waiting for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:24.002420  369869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:35:24.002439  369869 pod_ready.go:38] duration metric: took 4m6.701505951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:24.002490  369869 kubeadm.go:640] restartCluster took 4m24.423602043s
	W0229 02:35:24.002593  369869 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:35:24.002621  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
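	Process 369869 hits its 4m0s readiness deadline here, gives up on restarting the cluster in place, and falls back to a full reset, as the warning above states. The reset it issues, shown as a standalone sketch with the paths copied from the log line:
	
		# wipe control-plane state using the versioned kubeadm that minikube ships
		sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
		  kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	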
	I0229 02:35:21.147180  370051 cri.go:89] found id: ""
	I0229 02:35:21.147211  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.147221  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:21.147228  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:21.147284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:21.187240  370051 cri.go:89] found id: ""
	I0229 02:35:21.187275  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.187287  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:21.187295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:21.187389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:21.228873  370051 cri.go:89] found id: ""
	I0229 02:35:21.228899  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.228917  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:21.228924  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:21.228992  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:21.268827  370051 cri.go:89] found id: ""
	I0229 02:35:21.268856  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.268867  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:21.268876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:21.268970  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:21.313253  370051 cri.go:89] found id: ""
	I0229 02:35:21.313288  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.313297  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:21.313307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:21.313328  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:21.448089  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:21.448120  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:21.448146  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:21.539941  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:21.539983  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:21.590148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:21.590186  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:21.647760  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:21.647797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:24.165842  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:24.183263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:24.183345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:24.233173  370051 cri.go:89] found id: ""
	I0229 02:35:24.233208  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.233219  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:24.233228  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:24.233301  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:24.276937  370051 cri.go:89] found id: ""
	I0229 02:35:24.276977  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.276989  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:24.276998  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:24.277066  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:24.314629  370051 cri.go:89] found id: ""
	I0229 02:35:24.314665  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.314678  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:24.314686  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:24.314753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:24.367585  370051 cri.go:89] found id: ""
	I0229 02:35:24.367618  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.367630  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:24.367639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:24.367709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:24.451128  370051 cri.go:89] found id: ""
	I0229 02:35:24.451151  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.451160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:24.451167  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:24.451258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:24.497302  370051 cri.go:89] found id: ""
	I0229 02:35:24.497336  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.497348  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:24.497357  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:24.497431  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:24.544593  370051 cri.go:89] found id: ""
	I0229 02:35:24.544621  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.544632  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:24.544640  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:24.544714  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:24.584570  370051 cri.go:89] found id: ""
	I0229 02:35:24.584601  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.584613  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:24.584626  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:24.584645  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:24.669019  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:24.669044  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:24.669061  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:24.752163  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:24.752205  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:24.811945  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:24.811985  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:24.874832  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:24.874873  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
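Each gathering cycle above is the same diagnostic sweep: list containers per control-plane component with crictl, then pull the last 400 lines from each log source. A hand-run version of one cycle (a sketch; the commands mirror the ones in the log, not an official minikube interface):

  # Find a component's containers (empty here, since the control plane never started)
  sudo crictl ps -a --quiet --name=kube-apiserver
  # Tail a specific container's logs once an ID is found
  sudo /usr/bin/crictl logs --tail 400 <container-id>
  # Unit logs and kernel warnings round out the sweep
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400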
	I0229 02:35:23.928222  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:23.928275  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:23.983171  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:23.983216  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:23.999343  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:23.999382  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:24.180422  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:24.180476  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.745283  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:26.768785  369591 api_server.go:72] duration metric: took 4m17.549714658s to wait for apiserver process to appear ...
	I0229 02:35:26.768823  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:35:26.768885  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:26.768949  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:26.816275  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:26.816303  369591 cri.go:89] found id: ""
	I0229 02:35:26.816314  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:26.816379  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.820985  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:26.821062  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:26.870520  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.870545  369591 cri.go:89] found id: ""
	I0229 02:35:26.870555  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:26.870613  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.875785  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:26.875869  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:26.926844  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:26.926884  369591 cri.go:89] found id: ""
	I0229 02:35:26.926895  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:26.926963  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.933667  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:26.933747  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:26.988547  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:26.988575  369591 cri.go:89] found id: ""
	I0229 02:35:26.988584  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:26.988645  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.994520  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:26.994600  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.040568  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.040602  369591 cri.go:89] found id: ""
	I0229 02:35:27.040612  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:27.040679  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.046103  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.046161  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.094322  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.094345  369591 cri.go:89] found id: ""
	I0229 02:35:27.094357  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:27.094428  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.101702  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.101779  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.164549  369591 cri.go:89] found id: ""
	I0229 02:35:27.164584  369591 logs.go:276] 0 containers: []
	W0229 02:35:27.164596  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.164604  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:27.164674  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:27.219403  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:27.219431  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:27.219436  369591 cri.go:89] found id: ""
	I0229 02:35:27.219447  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:27.219510  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.226705  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.233551  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:27.233576  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.281111  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:27.281152  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.333686  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:27.333738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:27.948683  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.948736  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:28.018866  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:28.018917  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:28.164820  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:28.164857  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:28.222926  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:28.222963  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:28.265708  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:28.265738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:28.309311  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.309352  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:28.363295  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:28.363341  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:28.384099  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:28.384146  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:28.451988  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:28.452025  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:28.499748  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:28.499783  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:25.586753  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.589329  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.392846  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:27.419255  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:27.419339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:27.465294  370051 cri.go:89] found id: ""
	I0229 02:35:27.465325  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.465337  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:27.465345  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:27.465417  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:27.533393  370051 cri.go:89] found id: ""
	I0229 02:35:27.533424  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.533433  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:27.533441  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:27.533510  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:27.587195  370051 cri.go:89] found id: ""
	I0229 02:35:27.587221  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.587232  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:27.587240  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:27.587313  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:27.638597  370051 cri.go:89] found id: ""
	I0229 02:35:27.638624  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.638632  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:27.638639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:27.638709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.687695  370051 cri.go:89] found id: ""
	I0229 02:35:27.687730  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.687742  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:27.687750  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.687825  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.732275  370051 cri.go:89] found id: ""
	I0229 02:35:27.732309  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.732320  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:27.732327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.732389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.783069  370051 cri.go:89] found id: ""
	I0229 02:35:27.783109  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.783122  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.783133  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:27.783224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:27.832385  370051 cri.go:89] found id: ""
	I0229 02:35:27.832416  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.832429  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:27.832443  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.832460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:27.902610  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:27.902658  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:27.919900  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:27.919947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:28.003313  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:28.003337  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:28.003356  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:28.100814  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.100853  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:30.654289  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:30.683056  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:30.683141  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:30.734678  370051 cri.go:89] found id: ""
	I0229 02:35:30.734704  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.734712  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:30.734719  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:30.734771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:30.780792  370051 cri.go:89] found id: ""
	I0229 02:35:30.780821  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.780830  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:30.780837  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:30.780904  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:30.827244  370051 cri.go:89] found id: ""
	I0229 02:35:30.827269  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.827278  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:30.827285  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:30.827336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:30.871305  370051 cri.go:89] found id: ""
	I0229 02:35:30.871333  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.871342  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:30.871348  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:30.871423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:30.910095  370051 cri.go:89] found id: ""
	I0229 02:35:30.910121  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.910130  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:30.910136  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:30.910188  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:30.955234  370051 cri.go:89] found id: ""
	I0229 02:35:30.955261  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.955271  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:30.955278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:30.955345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:30.996555  370051 cri.go:89] found id: ""
	I0229 02:35:30.996589  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.996602  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:30.996611  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:30.996687  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:31.036424  370051 cri.go:89] found id: ""
	I0229 02:35:31.036454  370051 logs.go:276] 0 containers: []
	W0229 02:35:31.036464  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:31.036474  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.036488  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.107928  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.107987  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.125268  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.125303  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.053142  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:35:31.060477  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:35:31.062106  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:35:31.062143  369591 api_server.go:131] duration metric: took 4.2933111s to wait for apiserver health ...
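The healthz wait that succeeds here can be probed directly; a sketch against the endpoint named in the log (-k because the apiserver serves a cluster-internal certificate):

  # api_server.go treats an HTTP 200 with body "ok" as healthy
  curl -k https://192.168.72.114:8443/healthz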
	I0229 02:35:31.062154  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:35:31.062189  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:31.062278  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:31.119877  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:31.119905  369591 cri.go:89] found id: ""
	I0229 02:35:31.119915  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:31.119981  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.125569  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:31.125648  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:31.193662  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.193693  369591 cri.go:89] found id: ""
	I0229 02:35:31.193704  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:31.193762  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.199267  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:31.199365  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:31.251832  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.251862  369591 cri.go:89] found id: ""
	I0229 02:35:31.251873  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:31.251935  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.258374  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:31.258477  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:31.309718  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.309745  369591 cri.go:89] found id: ""
	I0229 02:35:31.309753  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:31.309804  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.314949  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:31.315025  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:31.367936  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:31.367960  369591 cri.go:89] found id: ""
	I0229 02:35:31.367970  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:31.368038  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.373072  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:31.373137  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:31.420362  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:31.420390  369591 cri.go:89] found id: ""
	I0229 02:35:31.420402  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:31.420470  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.427151  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:31.427221  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:31.482289  369591 cri.go:89] found id: ""
	I0229 02:35:31.482321  369591 logs.go:276] 0 containers: []
	W0229 02:35:31.482333  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:31.482342  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:31.482405  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:31.526713  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.526738  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.526744  369591 cri.go:89] found id: ""
	I0229 02:35:31.526755  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:31.526807  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.531874  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.536727  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.536758  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.555901  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.555943  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.689587  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:31.689629  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.737625  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:31.737669  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.781015  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:31.781050  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.824727  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:31.824757  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.866867  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.866897  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.920324  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:31.920375  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.962783  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:31.962815  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:32.003525  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:32.003557  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:32.061377  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:32.061417  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:32.454041  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:32.454097  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:32.498969  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:32.499006  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:30.086688  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:32.087795  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:34.585435  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:35.060469  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:35:35.060503  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.060509  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.060516  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.060521  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.060525  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.060530  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.060538  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.060543  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.060553  369591 system_pods.go:74] duration metric: took 3.99838967s to wait for pod list to return data ...
	I0229 02:35:35.060563  369591 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:35:35.063638  369591 default_sa.go:45] found service account: "default"
	I0229 02:35:35.063665  369591 default_sa.go:55] duration metric: took 3.094531ms for default service account to be created ...
	I0229 02:35:35.063676  369591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:35:35.071344  369591 system_pods.go:86] 8 kube-system pods found
	I0229 02:35:35.071366  369591 system_pods.go:89] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.071371  369591 system_pods.go:89] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.071375  369591 system_pods.go:89] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.071380  369591 system_pods.go:89] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.071385  369591 system_pods.go:89] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.071389  369591 system_pods.go:89] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.071397  369591 system_pods.go:89] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.071408  369591 system_pods.go:89] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.071420  369591 system_pods.go:126] duration metric: took 7.737446ms to wait for k8s-apps to be running ...
	I0229 02:35:35.071433  369591 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:35:35.071482  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:35.091472  369591 system_svc.go:56] duration metric: took 20.031453ms WaitForService to wait for kubelet.
	I0229 02:35:35.091504  369591 kubeadm.go:581] duration metric: took 4m25.872454283s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:35:35.091523  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:35:35.095487  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:35:35.095509  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:35:35.095546  369591 node_conditions.go:105] duration metric: took 4.018229ms to run NodePressure ...
	I0229 02:35:35.095567  369591 start.go:228] waiting for startup goroutines ...
	I0229 02:35:35.095580  369591 start.go:233] waiting for cluster config update ...
	I0229 02:35:35.095594  369591 start.go:242] writing updated cluster config ...
	I0229 02:35:35.095888  369591 ssh_runner.go:195] Run: rm -f paused
	I0229 02:35:35.154197  369591 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 02:35:35.156089  369591 out.go:177] * Done! kubectl is now configured to use "no-preload-247751" cluster and "default" namespace by default
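The verification steps this run just completed reduce to three commands; a rough equivalent (a sketch: the kubectl context name is assumed to match the profile no-preload-247751):

  # system_pods: every kube-system pod listed; metrics-server may stay Pending
  kubectl --context no-preload-247751 -n kube-system get pods
  # default_sa: the default service account exists
  kubectl --context no-preload-247751 -n default get serviceaccount default
  # system_svc: the kubelet unit is active on the node
  sudo systemctl is-active --quiet service kubelet && echo kubelet active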
	W0229 02:35:31.217691  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:31.217717  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:31.217740  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:31.313847  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:31.313883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:33.861648  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:33.876887  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:33.876954  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:33.921545  370051 cri.go:89] found id: ""
	I0229 02:35:33.921577  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.921588  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:33.921597  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:33.921658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:33.972558  370051 cri.go:89] found id: ""
	I0229 02:35:33.972584  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.972592  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:33.972599  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:33.972662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:34.020821  370051 cri.go:89] found id: ""
	I0229 02:35:34.020852  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.020862  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:34.020873  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:34.020937  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:34.064076  370051 cri.go:89] found id: ""
	I0229 02:35:34.064110  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.064121  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:34.064129  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:34.064191  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:34.108523  370051 cri.go:89] found id: ""
	I0229 02:35:34.108557  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.108568  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:34.108576  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:34.108639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:34.149444  370051 cri.go:89] found id: ""
	I0229 02:35:34.149468  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.149478  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:34.149487  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:34.149562  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:34.193780  370051 cri.go:89] found id: ""
	I0229 02:35:34.193805  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.193814  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:34.193820  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:34.193913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:34.237088  370051 cri.go:89] found id: ""
	I0229 02:35:34.237118  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.237127  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:34.237137  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:34.237151  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:34.281055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:34.281091  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:34.333886  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:34.333925  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:34.353163  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:34.353204  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:34.465925  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:34.465951  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:34.465969  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:36.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:39.086456  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:37.049957  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:37.064297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:37.064384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:37.105669  370051 cri.go:89] found id: ""
	I0229 02:35:37.105703  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.105711  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:37.105720  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:37.105790  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:37.143753  370051 cri.go:89] found id: ""
	I0229 02:35:37.143788  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.143799  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:37.143808  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:37.143880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:37.180126  370051 cri.go:89] found id: ""
	I0229 02:35:37.180157  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.180166  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:37.180173  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:37.180227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:37.221135  370051 cri.go:89] found id: ""
	I0229 02:35:37.221173  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.221185  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:37.221193  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:37.221261  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:37.258888  370051 cri.go:89] found id: ""
	I0229 02:35:37.258920  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.258932  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:37.258940  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:37.259005  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:37.300970  370051 cri.go:89] found id: ""
	I0229 02:35:37.300998  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.301010  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:37.301018  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:37.301105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:37.349797  370051 cri.go:89] found id: ""
	I0229 02:35:37.349829  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.349841  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:37.349850  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:37.349916  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:37.408726  370051 cri.go:89] found id: ""
	I0229 02:35:37.408762  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.408773  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:37.408787  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:37.408805  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:37.462030  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:37.462064  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:37.477836  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:37.477868  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:37.553886  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:37.553924  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:37.553941  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:37.644637  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:37.644683  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:40.197937  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:40.212830  370051 kubeadm.go:640] restartCluster took 4m14.648338345s
	W0229 02:35:40.212984  370051 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 02:35:40.213021  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:35:40.673169  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:40.690108  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:40.702424  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:40.713782  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
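The status-2 result above is minikube's stale-config probe: ls must find all four kubeconfigs for cleanup to run, and any missing file (as here, after reset) skips it and proceeds straight to kubeadm init. The same test by hand (a sketch, mirroring the logged command):

  # Status 0 only if all four files exist; all four are missing after the reset
  sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
      || echo "config check failed, skipping stale config cleanup"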
	I0229 02:35:40.713832  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:35:40.775345  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:35:40.775527  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:35:40.929045  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:35:40.929185  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:35:40.929310  370051 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:35:41.154311  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:35:41.154449  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:35:41.162905  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:35:41.317651  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:35:41.319260  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:35:41.319358  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:35:41.319458  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:35:41.319564  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:35:41.319675  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:35:41.319772  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:35:41.319857  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:35:41.319963  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:35:41.320066  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:35:41.320166  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:35:41.320289  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:35:41.320357  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:35:41.320439  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:35:41.457291  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:35:41.599703  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:35:41.766344  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:35:41.939397  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:35:41.940740  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:35:41.090698  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:43.585822  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:41.942544  370051 out.go:204]   - Booting up control plane ...
	I0229 02:35:41.942656  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:35:41.946949  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:35:41.949540  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:35:41.950426  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:35:41.953310  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
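	At this point kubeadm has written the static Pod manifests and is waiting for the kubelet to start them; the API server is not involved yet, since the kubelet reads the manifest directory directly. A sketch for inspecting that directory on the node (assuming SSH access):

	    # The four control-plane manifests kubeadm reports creating above:
	    ls /etc/kubernetes/manifests
	    # expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml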
	I0229 02:35:45.586855  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:48.085961  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:50.585602  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:52.587992  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:55.085046  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.086710  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:59.590441  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.264698  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.262039409s)
	I0229 02:35:57.264826  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:57.285615  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:57.297607  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:57.309412  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:35:57.309471  369869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:35:57.540175  369869 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
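	This preflight warning is advisory: the kubelet is running (minikube started it) but is not enabled as a boot-time unit. The remedy is the one kubeadm itself names:

	    # Quoted from the warning above; makes the kubelet unit start on boot.
	    sudo systemctl enable kubelet.service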
	I0229 02:36:02.086317  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:04.587625  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.714158  369869 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:36:06.714249  369869 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:36:06.714325  369869 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:36:06.714490  369869 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:36:06.714633  369869 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:36:06.714742  369869 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:36:06.716059  369869 out.go:204]   - Generating certificates and keys ...
	I0229 02:36:06.716160  369869 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:36:06.716250  369869 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:36:06.716357  369869 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:36:06.716434  369869 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:36:06.716508  369869 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:36:06.716572  369869 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:36:06.716649  369869 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:36:06.716722  369869 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:36:06.716824  369869 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:36:06.716952  369869 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:36:06.717008  369869 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:36:06.717080  369869 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:36:06.717147  369869 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:36:06.717221  369869 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:36:06.717298  369869 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:36:06.717367  369869 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:36:06.717474  369869 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:36:06.717559  369869 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:36:06.718770  369869 out.go:204]   - Booting up control plane ...
	I0229 02:36:06.718866  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:36:06.718983  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:36:06.719074  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:36:06.719230  369869 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:36:06.719364  369869 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:36:06.719431  369869 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:36:06.719628  369869 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:36:06.719749  369869 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503520 seconds
	I0229 02:36:06.719906  369869 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:36:06.720060  369869 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:36:06.720126  369869 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:36:06.720344  369869 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-071485 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:36:06.720433  369869 kubeadm.go:322] [bootstrap-token] Using token: oueq3v.8ghuyl6sece1tffl
	I0229 02:36:06.721973  369869 out.go:204]   - Configuring RBAC rules ...
	I0229 02:36:06.722107  369869 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:36:06.722252  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:36:06.722444  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:36:06.722643  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:36:06.722793  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:36:06.722937  369869 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:36:06.723081  369869 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:36:06.723119  369869 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:36:06.723188  369869 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:36:06.723198  369869 kubeadm.go:322] 
	I0229 02:36:06.723285  369869 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:36:06.723310  369869 kubeadm.go:322] 
	I0229 02:36:06.723426  369869 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:36:06.723436  369869 kubeadm.go:322] 
	I0229 02:36:06.723467  369869 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:36:06.723556  369869 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:36:06.723637  369869 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:36:06.723646  369869 kubeadm.go:322] 
	I0229 02:36:06.723713  369869 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:36:06.723722  369869 kubeadm.go:322] 
	I0229 02:36:06.723799  369869 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:36:06.723809  369869 kubeadm.go:322] 
	I0229 02:36:06.723869  369869 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:36:06.723979  369869 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:36:06.724073  369869 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:36:06.724083  369869 kubeadm.go:322] 
	I0229 02:36:06.724178  369869 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:36:06.724269  369869 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:36:06.724279  369869 kubeadm.go:322] 
	I0229 02:36:06.724389  369869 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724520  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 \
	I0229 02:36:06.724552  369869 kubeadm.go:322] 	--control-plane 
	I0229 02:36:06.724560  369869 kubeadm.go:322] 
	I0229 02:36:06.724665  369869 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:36:06.724675  369869 kubeadm.go:322] 
	I0229 02:36:06.724767  369869 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724923  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
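	The --discovery-token-ca-cert-hash in the join commands pins the cluster CA so a joining node cannot be fed a forged API server. A sketch of recomputing it on the control plane for comparison (the standard kubeadm recipe, assuming the default RSA CA key; the certificate directory is the /var/lib/minikube/certs reported earlier in this run):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	    # should print: fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37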
	I0229 02:36:06.724941  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:36:06.724952  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:36:06.726566  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:36:07.088398  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:09.587442  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.727880  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:36:06.786343  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
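	The log shows a 457-byte conflist being copied to /etc/cni/net.d/1-k8s.conflist but not its contents, so the following is a hypothetical illustration only: a minimal bridge-plugin conflist of the same general shape, not minikube's actual file.

	    # Hypothetical example (contents NOT taken from the log above):
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF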
	I0229 02:36:06.842349  369869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:36:06.842420  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=default-k8s-diff-port-071485 minikube.k8s.io/updated_at=2024_02_29T02_36_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:06.842428  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.196763  369869 ops.go:34] apiserver oom_adj: -16
	I0229 02:36:07.196958  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.696991  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.197336  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.697155  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.197955  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.697107  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:10.197816  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.085528  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:14.085852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:10.697486  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.197744  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.697179  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.197614  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.697015  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.197983  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.697315  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.196982  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.698012  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.197896  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.697895  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.197062  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.697819  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.197222  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.697031  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.197683  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.697094  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.870924  369869 kubeadm.go:1088] duration metric: took 12.028572011s to wait for elevateKubeSystemPrivileges.
	I0229 02:36:18.870961  369869 kubeadm.go:406] StartCluster complete in 5m19.353203226s
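	The burst of half-second-spaced "kubectl get sa default" runs above is the elevateKubeSystemPrivileges poll: bring-up blocks until kube-controller-manager has created the default ServiceAccount. A hedged stand-alone equivalent of that wait loop:

	    # Poll until the default ServiceAccount exists (the ~12s the metric above reports).
	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done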
	I0229 02:36:18.870986  369869 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.871077  369869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:36:18.873654  369869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.873954  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:36:18.874041  369869 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:36:18.874118  369869 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874130  369869 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874142  369869 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874149  369869 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:36:18.874152  369869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071485"
	I0229 02:36:18.874201  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874256  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:36:18.874341  369869 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874359  369869 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874367  369869 addons.go:243] addon metrics-server should already be in state true
	I0229 02:36:18.874422  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874637  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874691  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874811  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874846  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.892207  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0229 02:36:18.892260  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0229 02:36:18.892967  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.892986  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.893508  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893528  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893680  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893700  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893936  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894102  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894143  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0229 02:36:18.894331  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.894582  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.894594  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.894613  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.895109  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.895143  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.895508  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.896106  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.896142  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.898127  369869 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.898143  369869 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:36:18.898168  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.898482  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.898516  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.917303  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0229 02:36:18.917472  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I0229 02:36:18.917747  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.917894  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.918493  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918510  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.918654  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918665  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.919012  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919077  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919229  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.919754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.921030  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.922677  369869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:36:18.921622  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.923872  369869 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:18.923899  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:36:18.923919  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.925237  369869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:36:18.926153  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:36:18.924603  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0229 02:36:18.926269  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:36:18.926303  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.927739  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.928184  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.928277  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.928299  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.930032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.930057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.930386  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.930456  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.930614  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.930723  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.930914  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931014  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.931133  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.931185  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.931533  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.931553  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.931576  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931737  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.932033  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.932190  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.948311  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0229 02:36:18.949328  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.949793  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.949819  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.950313  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.950529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.952381  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.952660  369869 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:18.952673  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:36:18.952689  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.956332  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.956779  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.956808  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.957117  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.957313  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.957425  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.957485  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:19.128114  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:19.141619  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:36:19.141649  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:36:19.169945  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:19.187099  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:36:19.187124  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:36:19.211358  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
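	The sed pipeline above rewrites the coredns ConfigMap in flight: it inserts a hosts block (mapping host.minikube.internal to the host-side gateway 192.168.61.1) before the forward directive, and a log directive before errors. Reconstructed from those sed expressions, the patched Corefile gains roughly this fragment (indentation approximate):

	    #     log
	    #     errors
	    #     hosts {
	    #        192.168.61.1 host.minikube.internal
	    #        fallthrough
	    #     }
	    #     forward . /etc/resolv.conf
	    # Inspect the live result with:
	    sudo /var/lib/minikube/binaries/v1.28.4/kubectl -n kube-system \
	      get configmap coredns -o yaml --kubeconfig=/var/lib/minikube/kubeconfig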
	I0229 02:36:19.289856  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:36:19.289880  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:36:19.398720  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:36:19.414512  369869 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-071485" context rescaled to 1 replicas
	I0229 02:36:19.414562  369869 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:36:19.416389  369869 out.go:177] * Verifying Kubernetes components...
	I0229 02:36:15.586606  369508 pod_ready.go:81] duration metric: took 4m0.008250092s waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:36:15.586638  369508 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:36:15.586648  369508 pod_ready.go:38] duration metric: took 4m5.573018241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:36:15.586669  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:36:15.586707  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:15.586771  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:15.644937  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:15.644969  369508 cri.go:89] found id: ""
	I0229 02:36:15.644980  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:15.645054  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.653058  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:15.653137  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:15.709225  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:15.709254  369508 cri.go:89] found id: ""
	I0229 02:36:15.709264  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:15.709333  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.715304  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:15.715391  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:15.769593  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:15.769627  369508 cri.go:89] found id: ""
	I0229 02:36:15.769637  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:15.769702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.775157  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:15.775230  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:15.820002  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:15.820030  369508 cri.go:89] found id: ""
	I0229 02:36:15.820040  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:15.820105  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.827058  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:15.827122  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:15.875030  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:15.875063  369508 cri.go:89] found id: ""
	I0229 02:36:15.875074  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:15.875142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.880489  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:15.880555  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:15.929452  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:15.929476  369508 cri.go:89] found id: ""
	I0229 02:36:15.929484  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:15.929545  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.934321  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:15.934396  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:15.981960  369508 cri.go:89] found id: ""
	I0229 02:36:15.981997  369508 logs.go:276] 0 containers: []
	W0229 02:36:15.982006  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:15.982014  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:15.982077  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:16.034169  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.034196  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.034201  369508 cri.go:89] found id: ""
	I0229 02:36:16.034210  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:16.034281  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.039463  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.044719  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:16.044748  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:16.111048  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:16.111084  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:16.278784  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:16.278832  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:16.333048  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:16.333085  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:16.376514  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:16.376555  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:16.420840  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:16.420944  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:16.468273  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:16.468308  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:16.526001  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:16.526043  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.569084  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:16.569120  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.609818  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:16.609847  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:16.660979  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:16.661019  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:16.677397  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:16.677432  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:16.732421  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:16.732464  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
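	The sweep above is the diagnostic log-collection pass, condensed here into the hand-runnable commands it shells out to (all taken from the Run lines; substitute a real ID from crictl ps for the placeholder):

	    sudo journalctl -u kubelet -n 400            # kubelet logs
	    sudo journalctl -u crio -n 400               # CRI-O runtime logs
	    sudo crictl ps -a                            # container status
	    sudo crictl logs --tail 400 <container-id>   # per-container logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400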
	I0229 02:36:19.417788  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:21.277741  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.107753576s)
	I0229 02:36:21.277802  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277815  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.066425449s)
	I0229 02:36:21.277873  369869 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.149690589s)
	I0229 02:36:21.277908  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277918  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278323  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278331  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278339  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278351  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278445  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278458  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278465  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278474  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278519  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278592  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278603  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280452  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.280470  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280482  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.300880  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.300907  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.301193  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.301217  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.572633  369869 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.154816183s)
	I0229 02:36:21.572676  369869 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.572635  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.173852857s)
	I0229 02:36:21.572814  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.572842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.573207  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573215  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573228  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.573236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573538  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573575  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573587  369869 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-071485"
	I0229 02:36:21.575111  369869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
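	These three addons were enabled programmatically during start; a sketch of the equivalent from the minikube CLI (profile name taken from the log):

	    minikube -p default-k8s-diff-port-071485 addons enable metrics-server
	    minikube -p default-k8s-diff-port-071485 addons list   # verify state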
	I0229 02:36:19.738493  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:36:19.758171  369508 api_server.go:72] duration metric: took 4m17.008228834s to wait for apiserver process to appear ...
	I0229 02:36:19.758199  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:36:19.758281  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:19.758349  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:19.811042  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:19.811071  369508 cri.go:89] found id: ""
	I0229 02:36:19.811082  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:19.811145  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.817952  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:19.818034  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:19.871006  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:19.871033  369508 cri.go:89] found id: ""
	I0229 02:36:19.871043  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:19.871109  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.877440  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:19.877512  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:19.928043  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:19.928071  369508 cri.go:89] found id: ""
	I0229 02:36:19.928081  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:19.928142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.935299  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:19.935363  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:19.977360  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:19.977391  369508 cri.go:89] found id: ""
	I0229 02:36:19.977402  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:19.977482  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.982361  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:19.982442  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:20.025903  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.025931  369508 cri.go:89] found id: ""
	I0229 02:36:20.025941  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:20.026012  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.031390  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:20.031477  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:20.080768  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.080792  369508 cri.go:89] found id: ""
	I0229 02:36:20.080800  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:20.080864  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.087322  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:20.087388  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:20.139067  369508 cri.go:89] found id: ""
	I0229 02:36:20.139111  369508 logs.go:276] 0 containers: []
	W0229 02:36:20.139124  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:20.139132  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:20.139195  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:20.193052  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:20.193085  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:20.193091  369508 cri.go:89] found id: ""
	I0229 02:36:20.193101  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:20.193174  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.199740  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.205385  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:20.205414  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:20.360843  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:20.360894  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:20.411077  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:20.411113  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:20.459855  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:20.459910  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:20.517056  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:20.517101  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.568151  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:20.568185  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.637131  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:20.637165  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.144933  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:21.144980  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:21.206565  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:21.206607  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:21.257071  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:21.257118  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:21.315541  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:21.315589  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:21.358630  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:21.358665  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:21.398170  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:21.398201  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
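The block above is one complete log-gathering pass: for each control-plane component, minikube resolves the CRI container ID with crictl, tails the last 400 lines of that container, and rounds the pass out with journalctl (crio, kubelet) and dmesg. A minimal sketch of reproducing one step of that pass by hand, assuming the profile this process later reports as embed-certs-915633 and a placeholder container ID:
	minikube ssh -p embed-certs-915633 -- sudo crictl ps -a --quiet --name=kube-apiserver
	minikube ssh -p embed-certs-915633 -- sudo crictl logs --tail 400 <container-id>
	minikube ssh -p embed-certs-915633 -- sudo journalctl -u crio -n 400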
	I0229 02:36:23.914059  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:36:23.923854  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:36:23.926443  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:36:23.926466  369508 api_server.go:131] duration metric: took 4.168260413s to wait for apiserver health ...
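The healthz wait above simply polls the apiserver endpoint until it answers 200 with the body "ok". Done by hand it would look roughly like this (-k skips certificate verification, since the serving certificate is signed by the cluster's own CA rather than a publicly trusted one):
	curl -k https://192.168.50.218:8443/healthz
	ok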
	I0229 02:36:23.926475  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:36:23.926506  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:23.926566  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:24.013825  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:24.013849  369508 cri.go:89] found id: ""
	I0229 02:36:24.013857  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:24.013913  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.019432  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:24.019506  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:24.078857  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.078877  369508 cri.go:89] found id: ""
	I0229 02:36:24.078885  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:24.078945  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.083761  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:24.083822  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:24.133681  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:24.133707  369508 cri.go:89] found id: ""
	I0229 02:36:24.133717  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:24.133779  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.139165  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:24.139228  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:24.185863  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.185883  369508 cri.go:89] found id: ""
	I0229 02:36:24.185892  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:24.185939  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.191094  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:24.191164  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:24.232922  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.232953  369508 cri.go:89] found id: ""
	I0229 02:36:24.232963  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:24.233031  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.238154  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:24.238252  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:24.280735  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:24.280760  369508 cri.go:89] found id: ""
	I0229 02:36:24.280769  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:24.280842  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.285497  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:24.285558  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:24.324979  369508 cri.go:89] found id: ""
	I0229 02:36:24.325007  369508 logs.go:276] 0 containers: []
	W0229 02:36:24.325016  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:24.325022  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:24.325085  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:24.370875  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:24.370908  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:24.370912  369508 cri.go:89] found id: ""
	I0229 02:36:24.370919  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:24.370973  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.378247  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.382856  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:24.382899  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.430889  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:24.430919  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.470370  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:24.470407  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.576300  369869 addons.go:505] enable addons completed in 2.702258052s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 02:36:21.582468  369869 node_ready.go:49] node "default-k8s-diff-port-071485" has status "Ready":"True"
	I0229 02:36:21.582494  369869 node_ready.go:38] duration metric: took 9.804213ms waiting for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.582506  369869 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:36:21.608694  369869 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125662  369869 pod_ready.go:92] pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.125695  369869 pod_ready.go:81] duration metric: took 1.51697387s waiting for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125707  369869 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141831  369869 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.141855  369869 pod_ready.go:81] duration metric: took 16.140002ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141864  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154216  369869 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.154261  369869 pod_ready.go:81] duration metric: took 12.389751ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154276  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166057  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.166085  369869 pod_ready.go:81] duration metric: took 11.798242ms waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166098  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179414  369869 pod_ready.go:92] pod "kube-proxy-gr44w" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.179437  369869 pod_ready.go:81] duration metric: took 13.331411ms waiting for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179447  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576569  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.576597  369869 pod_ready.go:81] duration metric: took 397.142516ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576611  369869 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
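From here the run polls the metrics-server pod every couple of seconds, and the repeated "Ready":"False" lines below are that poll loop; pod_ready.go allows up to 6m0s before giving up. A hand-run equivalent of one poll, assuming the kubectl context matches the profile name (minikube's default):
	kubectl --context default-k8s-diff-port-071485 -n kube-system get pod metrics-server-57f55c9bc5-fpwzl -o wide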
	I0229 02:36:21.953781  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:36:21.954431  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:21.954685  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
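Port 10248 is the kubelet's local healthz endpoint. "connection refused" (as opposed to an unhealthy 500 response) means nothing is listening at all, i.e. the kubelet process never came up, so kubeadm's wait-control-plane phase is guaranteed to time out. The checks kubeadm prints later are the right first moves, for example:
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	curl -sSL http://localhost:10248/healthz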
	I0229 02:36:24.880947  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:24.880985  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:24.939045  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:24.939079  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.987109  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:24.987144  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:25.049095  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:25.049131  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:25.091654  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:25.091686  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:25.153281  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:25.153326  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:25.169544  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:25.169575  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:25.294469  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:25.294504  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:25.346867  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:25.346900  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:25.388876  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:25.388921  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:27.937848  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:36:27.937878  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.937883  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.937888  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.937891  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.937894  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.937898  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.937903  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.937908  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.937922  369508 system_pods.go:74] duration metric: took 4.011440564s to wait for pod list to return data ...
	I0229 02:36:27.937933  369508 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:36:27.940602  369508 default_sa.go:45] found service account: "default"
	I0229 02:36:27.940623  369508 default_sa.go:55] duration metric: took 2.681589ms for default service account to be created ...
	I0229 02:36:27.940632  369508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:36:27.947433  369508 system_pods.go:86] 8 kube-system pods found
	I0229 02:36:27.947455  369508 system_pods.go:89] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.947466  369508 system_pods.go:89] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.947472  369508 system_pods.go:89] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.947482  369508 system_pods.go:89] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.947491  369508 system_pods.go:89] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.947497  369508 system_pods.go:89] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.947508  369508 system_pods.go:89] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.947518  369508 system_pods.go:89] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.947531  369508 system_pods.go:126] duration metric: took 6.892538ms to wait for k8s-apps to be running ...
	I0229 02:36:27.947539  369508 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:36:27.947591  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:27.965730  369508 system_svc.go:56] duration metric: took 18.181663ms WaitForService to wait for kubelet.
	I0229 02:36:27.965756  369508 kubeadm.go:581] duration metric: took 4m25.215820473s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:36:27.965780  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:36:27.970094  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:36:27.970123  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:36:27.970138  369508 node_conditions.go:105] duration metric: took 4.347423ms to run NodePressure ...
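For reference, 17734596Ki is 17734596 / 1024^2 ≈ 16.9GiB, so the NodePressure check is verifying a node with roughly 17GiB of ephemeral storage and 2 CPUs.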
	I0229 02:36:27.970152  369508 start.go:228] waiting for startup goroutines ...
	I0229 02:36:27.970162  369508 start.go:233] waiting for cluster config update ...
	I0229 02:36:27.970175  369508 start.go:242] writing updated cluster config ...
	I0229 02:36:27.970529  369508 ssh_runner.go:195] Run: rm -f paused
	I0229 02:36:28.020686  369508 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:36:28.022730  369508 out.go:177] * Done! kubectl is now configured to use "embed-certs-915633" cluster and "default" namespace by default
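The closing version line is informational rather than a warning: kubectl 1.29.2 against a 1.28.4 apiserver is within the one-minor-version skew kubectl supports. Comparing the two by hand:
	kubectl version --output=yaml    # compare clientVersion with serverVersion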
	I0229 02:36:25.585985  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:28.085278  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:26.954801  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:26.955093  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:30.583462  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:32.584198  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:34.585129  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:37.085551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:39.584450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:36.955344  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:36.955543  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:41.585000  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:44.083919  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:46.085694  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:48.583474  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:50.584026  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:53.084622  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:55.084729  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:57.084941  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:59.586329  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:56.957911  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:56.958178  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:02.085189  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:04.085672  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:06.586906  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:09.085130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:11.583811  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:13.585179  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:16.083670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:18.084884  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:20.584395  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:22.585487  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:24.586088  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:26.586608  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:29.084644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:31.585292  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:34.083690  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:36.959509  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:37:36.959795  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:36.959812  370051 kubeadm.go:322] 
	I0229 02:37:36.959848  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:37:36.959887  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:37:36.959893  370051 kubeadm.go:322] 
	I0229 02:37:36.959937  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:37:36.959991  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:37:36.960142  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:37:36.960167  370051 kubeadm.go:322] 
	I0229 02:37:36.960282  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:37:36.960318  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:37:36.960362  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:37:36.960371  370051 kubeadm.go:322] 
	I0229 02:37:36.960482  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:37:36.960617  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0229 02:37:36.960756  370051 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0229 02:37:36.960839  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:37:36.960951  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:37:36.961015  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:37:36.961366  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:37:36.961507  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:37:36.961616  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 02:37:36.961763  370051 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 02:37:36.961835  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:37:37.427665  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:37:37.443045  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:37:37.456937  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
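The status-2 ls here is expected rather than a new failure: the kubeadm reset two steps earlier already removed the /etc/kubernetes/*.conf files, so there is no stale config to clean up and minikube proceeds straight to the retry. Verifying by hand is the same probe:
	sudo ls -la /etc/kubernetes/*.conf    # exit status 2 simply means the files are already gone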
	I0229 02:37:37.456979  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:37:37.529093  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:37:37.529246  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:37:37.670260  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:37:37.670417  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:37:37.670548  370051 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 02:37:37.904220  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:37:37.905569  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:37:37.914919  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:37:38.070911  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:37:38.072738  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:37:38.072860  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:37:38.072951  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:37:38.073049  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:37:38.073132  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:37:38.073230  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:37:38.073299  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:37:38.073376  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:37:38.073458  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:37:38.073566  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:37:38.073680  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:37:38.073720  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:37:38.073794  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:37:38.209805  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:37:38.305550  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:37:38.464715  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:37:38.623139  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:37:38.624364  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:37:36.084556  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.086561  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.625883  370051 out.go:204]   - Booting up control plane ...
	I0229 02:37:38.626039  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:37:38.630668  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:37:38.631740  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:37:38.632687  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:37:38.636043  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
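Note the timestamps: wait-control-plane starts here at 02:37:38.636, and the "Initial timeout of 40s passed" line below lands at 02:38:18.637, almost exactly 40 seconds later, so the retry is tracking the same kubelet failure as the first attempt.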
	I0229 02:37:40.583589  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:42.583968  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:44.584409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:46.586413  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:49.084223  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:51.584770  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:53.584871  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:55.585299  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:58.084753  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:00.584432  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:03.085511  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:05.585519  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:08.085774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:10.087984  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:12.584744  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:15.085757  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:17.584807  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:19.588130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:18.637746  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:38:18.638616  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:18.638883  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:22.084442  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:24.085227  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:23.639374  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:23.639613  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:26.087774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:28.584872  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:30.587375  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.085060  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:35.086106  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.640169  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:33.640468  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:37.584670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:40.085797  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:42.585365  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:44.587079  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:46.590638  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:49.086500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:51.584286  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.587405  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.640871  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:53.641147  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:56.084551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:58.085668  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:00.086247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:02.588854  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:05.085163  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:07.090885  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:09.583687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:11.585184  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:14.085800  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:16.086643  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:18.584073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:21.084992  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:23.585496  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:25.586111  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:28.086464  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:33.642813  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:39:33.643083  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:39:33.643099  370051 kubeadm.go:322] 
	I0229 02:39:33.643153  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:39:33.643206  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:39:33.643213  370051 kubeadm.go:322] 
	I0229 02:39:33.643252  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:39:33.643296  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:39:33.643443  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:39:33.643455  370051 kubeadm.go:322] 
	I0229 02:39:33.643605  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:39:33.643655  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:39:33.643700  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:39:33.643714  370051 kubeadm.go:322] 
	I0229 02:39:33.643871  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:39:33.644040  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0229 02:39:33.644193  370051 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0229 02:39:33.644272  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:39:33.644371  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:39:33.644412  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:39:33.644855  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:39:33.644972  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:39:33.645065  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:39:33.645132  370051 kubeadm.go:406] StartCluster complete in 8m8.138449101s
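The 8m8s figure is consistent with the two failed init attempts above: the first ran from cluster start until its timeout at 02:37:36, the retry from 02:37:37 until 02:39:33, and in both cases the wait-control-plane phase expired because the kubelet never answered on port 10248.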
	I0229 02:39:33.645178  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:39:33.645255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:39:33.699121  370051 cri.go:89] found id: ""
	I0229 02:39:33.699154  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.699166  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:39:33.699174  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:39:33.699240  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:39:33.747229  370051 cri.go:89] found id: ""
	I0229 02:39:33.747260  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.747272  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:39:33.747279  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:39:33.747349  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:39:33.789303  370051 cri.go:89] found id: ""
	I0229 02:39:33.789334  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.789343  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:39:33.789350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:39:33.789413  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:39:33.832769  370051 cri.go:89] found id: ""
	I0229 02:39:33.832801  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.832814  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:39:33.832824  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:39:33.832891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:39:33.881508  370051 cri.go:89] found id: ""
	I0229 02:39:33.881543  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.881554  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:39:33.881571  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:39:33.881635  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:39:33.941691  370051 cri.go:89] found id: ""
	I0229 02:39:33.941728  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.941740  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:39:33.941749  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:39:33.941822  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:39:33.990639  370051 cri.go:89] found id: ""
	I0229 02:39:33.990681  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.990704  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:39:33.990713  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:39:33.990774  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:39:34.038426  370051 cri.go:89] found id: ""
	I0229 02:39:34.038460  370051 logs.go:276] 0 containers: []
	W0229 02:39:34.038470  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:39:34.038480  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:39:34.038497  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:39:34.054571  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:39:34.054604  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:39:34.131297  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
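"connection refused" on localhost:8443 is the expected follow-on failure: the crictl sweep above found no kube-apiserver container at all, so nothing is listening on the apiserver port and describe-nodes cannot succeed. The quickest confirmation is the listing kubeadm suggested, adapted for CRI-O:
	sudo crictl ps -a | grep kube    # empty here, matching the "0 containers" results above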
	I0229 02:39:34.131323  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:39:34.131337  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:39:34.232302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:39:34.232349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:39:34.283314  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:39:34.283351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:39:34.336858  370051 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:39:34.336920  370051 out.go:239] * 
	W0229 02:39:34.336985  370051 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	W0229 02:39:34.337006  370051 out.go:239] * 
	W0229 02:39:34.337787  370051 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:39:34.340744  370051 out.go:177] 
	W0229 02:39:34.342096  370051 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	W0229 02:39:34.342137  370051 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:39:34.342160  370051 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:39:34.343540  370051 out.go:177] 
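
Everything above for process 370051 is the standard K8S_KUBELET_NOT_RUNNING failure path: kubeadm polls http://localhost:10248/healthz, the kubelet never answers within the 4m0s window, and minikube exits with its stock suggestion. A minimal triage sketch, following only the commands the log itself recommends; the profile name below is a placeholder, not taken from this run:

    # Inspect the kubelet from inside the VM (profile name is hypothetical):
    minikube ssh -p <profile> -- 'sudo systemctl status kubelet --no-pager'
    minikube ssh -p <profile> -- 'sudo journalctl -xeu kubelet --no-pager | tail -n 50'
    # Probe the health endpoint kubeadm was polling:
    minikube ssh -p <profile> -- 'curl -sSL http://localhost:10248/healthz'
    # Retry with the cgroup driver pinned, per the suggestion above:
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
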
	I0229 02:39:30.584963  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:32.585599  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:34.588073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:37.085513  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:39.584721  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:41.585072  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:44.086996  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:46.587437  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:49.083819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:51.084472  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:53.085522  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:55.585518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:58.084454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:00.085075  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:02.588500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:05.083707  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:07.084423  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:09.584552  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:11.590611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:14.084618  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:16.597479  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:19.086312  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:21.586450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:23.583798  369869 pod_ready.go:81] duration metric: took 4m0.007166298s waiting for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
	E0229 02:40:23.583824  369869 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:40:23.583834  369869 pod_ready.go:38] duration metric: took 4m2.001316522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
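
The WaitExtra deadline above expired with metrics-server-57f55c9bc5-fpwzl still not Ready after the full 4m0s. When this wait times out, the pod's events usually explain the hang; a hedged check against the context this run uses (the label selector assumes the stock minikube metrics-server addon labels):

    kubectl --context default-k8s-diff-port-071485 -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20
    kubectl --context default-k8s-diff-port-071485 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20
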
	I0229 02:40:23.583860  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:40:23.583899  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:23.584002  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:23.655958  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:23.655987  369869 cri.go:89] found id: ""
	I0229 02:40:23.655997  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:23.656057  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.661125  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:23.661199  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:23.712373  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:23.712400  369869 cri.go:89] found id: ""
	I0229 02:40:23.712410  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:23.712508  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.718149  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:23.718209  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:23.775835  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:23.775858  369869 cri.go:89] found id: ""
	I0229 02:40:23.775867  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:23.775923  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.780698  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:23.780792  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:23.825914  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:23.825939  369869 cri.go:89] found id: ""
	I0229 02:40:23.825949  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:23.826017  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.830870  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:23.830932  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:23.868737  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:23.868767  369869 cri.go:89] found id: ""
	I0229 02:40:23.868777  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:23.868841  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.873522  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:23.873598  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:23.918640  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:23.918663  369869 cri.go:89] found id: ""
	I0229 02:40:23.918671  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:23.918725  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.923456  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:23.923517  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:23.963045  369869 cri.go:89] found id: ""
	I0229 02:40:23.963071  369869 logs.go:276] 0 containers: []
	W0229 02:40:23.963080  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:23.963085  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:23.963136  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:24.006380  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:24.006402  369869 cri.go:89] found id: ""
	I0229 02:40:24.006409  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:24.006459  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:24.012228  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:24.012269  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:24.095110  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:24.095354  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:24.117199  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:24.117229  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:24.181064  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:24.181126  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:24.239267  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:24.239305  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:24.283248  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:24.283281  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:24.746786  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:24.746831  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:24.764451  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:24.764487  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:24.917582  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:24.917625  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:24.980095  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:24.980142  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:25.028219  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:25.028253  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:25.083840  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:25.083874  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:25.131148  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:25.131179  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:25.179314  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:25.179340  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:25.179415  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:25.179432  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:25.179455  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:25.179471  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:25.179479  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
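
The two reflector warnings flagged as "Problems detected in kubelet" recur verbatim in every log-gathering round below. They are typically a one-shot startup race (the kubelet tried to list the coredns ConfigMap before the node authorizer had linked the node to a pod referencing it) rather than an ongoing RBAC problem. A hedged way to check whether the warning is still being emitted:

    # A count that stays constant across repeated runs suggests the error was one-shot:
    minikube ssh -p default-k8s-diff-port-071485 -- "sudo journalctl -u kubelet -n 400 | grep -c 'is forbidden'"
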
	I0229 02:40:35.181209  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:40:35.199982  369869 api_server.go:72] duration metric: took 4m15.785374734s to wait for apiserver process to appear ...
	I0229 02:40:35.200012  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:40:35.200052  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:35.200109  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:35.241760  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:35.241786  369869 cri.go:89] found id: ""
	I0229 02:40:35.241795  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:35.241846  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.247188  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:35.247294  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:35.293992  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:35.294022  369869 cri.go:89] found id: ""
	I0229 02:40:35.294033  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:35.294098  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.298900  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:35.298971  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:35.340809  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:35.340835  369869 cri.go:89] found id: ""
	I0229 02:40:35.340843  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:35.340903  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.345913  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:35.345988  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:35.392027  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:35.392061  369869 cri.go:89] found id: ""
	I0229 02:40:35.392072  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:35.392140  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.397043  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:35.397120  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:35.452900  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:35.452931  369869 cri.go:89] found id: ""
	I0229 02:40:35.452942  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:35.453014  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.459221  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:35.459303  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:35.503530  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:35.503555  369869 cri.go:89] found id: ""
	I0229 02:40:35.503563  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:35.503615  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.509021  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:35.509083  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:35.553777  369869 cri.go:89] found id: ""
	I0229 02:40:35.553803  369869 logs.go:276] 0 containers: []
	W0229 02:40:35.553812  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:35.553818  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:35.553868  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:35.605234  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:35.605259  369869 cri.go:89] found id: ""
	I0229 02:40:35.605267  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:35.605333  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.610433  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:35.610465  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:36.030757  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:36.030807  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:36.047193  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:36.047224  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:36.105936  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:36.105983  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:36.169028  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:36.169080  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:36.241640  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:36.241678  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:36.284787  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:36.284822  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:36.333264  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:36.333293  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:36.385436  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:36.385468  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:36.463289  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:36.463491  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:36.485748  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:36.485782  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:36.604181  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:36.604218  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:36.659210  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:36.659247  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:36.704612  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:36.704640  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:36.704695  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:36.704706  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:36.704712  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:36.704719  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:36.704726  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:46.705868  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:40:46.711301  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:40:46.713000  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:40:46.713025  369869 api_server.go:131] duration metric: took 11.513005073s to wait for apiserver health ...
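
The healthz wait above can be reproduced by hand: the apiserver for this profile listens on 192.168.61.233:8444 (the "diff port"), and /healthz is reachable unauthenticated under the default system:public-info-viewer RBAC binding; -k skips CA verification for a quick probe:

    curl -sk https://192.168.61.233:8444/healthz                         # expected body: ok
    curl -sk 'https://192.168.61.233:8444/healthz?verbose' | tail -n 5   # per-check detail
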
	I0229 02:40:46.713034  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:40:46.713061  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:46.713121  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:46.759486  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:46.759505  369869 cri.go:89] found id: ""
	I0229 02:40:46.759517  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:46.759581  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.764215  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:46.764299  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:46.805016  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:46.805042  369869 cri.go:89] found id: ""
	I0229 02:40:46.805049  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:46.805113  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.810213  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:46.810284  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:46.862825  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:46.862855  369869 cri.go:89] found id: ""
	I0229 02:40:46.862867  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:46.862923  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.867531  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:46.867588  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:46.914211  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:46.914247  369869 cri.go:89] found id: ""
	I0229 02:40:46.914258  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:46.914327  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.918857  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:46.918921  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:46.959981  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:46.960016  369869 cri.go:89] found id: ""
	I0229 02:40:46.960027  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:46.960095  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.964789  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:46.964846  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:47.009289  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:47.009313  369869 cri.go:89] found id: ""
	I0229 02:40:47.009322  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:47.009390  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:47.015339  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:47.015413  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:47.059195  369869 cri.go:89] found id: ""
	I0229 02:40:47.059227  369869 logs.go:276] 0 containers: []
	W0229 02:40:47.059239  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:47.059254  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:47.059306  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:47.103293  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:47.103323  369869 cri.go:89] found id: ""
	I0229 02:40:47.103334  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:47.103401  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:47.108048  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:47.108076  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:47.157407  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:47.157441  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:47.591202  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:47.591261  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:47.644877  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:47.644910  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:47.784217  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:47.784249  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:47.839113  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:47.839144  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:47.885581  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:47.885616  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:47.930971  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:47.931009  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:47.986352  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:47.986437  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:48.067103  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:48.067316  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:48.088373  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:48.088407  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:48.105750  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:48.105781  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:48.154640  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:48.154677  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:48.196009  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:48.196042  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:48.196112  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:48.196128  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:48.196137  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:48.196146  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:48.196155  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:58.203822  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:40:58.203853  369869 system_pods.go:61] "coredns-5dd5756b68-xj4sh" [e2741c05-81b2-4de6-8329-f88912d48160] Running
	I0229 02:40:58.203859  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [88b0e865-c53e-4829-a56a-2a3b6e405df4] Running
	I0229 02:40:58.203866  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [445fa1c9-589b-437d-92ca-0d15ee8228ae] Running
	I0229 02:40:58.203872  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [e3f60cdb-6214-4987-b692-a4921ece3895] Running
	I0229 02:40:58.203877  369869 system_pods.go:61] "kube-proxy-gr44w" [a74b553f-683a-4e1b-ac48-b4553d00b306] Running
	I0229 02:40:58.203881  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [4c1afe85-10be-45e5-8b99-6bd3cf12a828] Running
	I0229 02:40:58.203888  369869 system_pods.go:61] "metrics-server-57f55c9bc5-fpwzl" [5215d27e-4bf2-4331-89f2-24096dc96b90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:40:58.203893  369869 system_pods.go:61] "storage-provisioner" [d7b70f8e-1689-4526-a39f-eb8005cbecd2] Running
	I0229 02:40:58.203902  369869 system_pods.go:74] duration metric: took 11.49086169s to wait for pod list to return data ...
	I0229 02:40:58.203913  369869 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:40:58.207120  369869 default_sa.go:45] found service account: "default"
	I0229 02:40:58.207145  369869 default_sa.go:55] duration metric: took 3.22533ms for default service account to be created ...
	I0229 02:40:58.207154  369869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:40:58.213026  369869 system_pods.go:86] 8 kube-system pods found
	I0229 02:40:58.213056  369869 system_pods.go:89] "coredns-5dd5756b68-xj4sh" [e2741c05-81b2-4de6-8329-f88912d48160] Running
	I0229 02:40:58.213065  369869 system_pods.go:89] "etcd-default-k8s-diff-port-071485" [88b0e865-c53e-4829-a56a-2a3b6e405df4] Running
	I0229 02:40:58.213073  369869 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071485" [445fa1c9-589b-437d-92ca-0d15ee8228ae] Running
	I0229 02:40:58.213081  369869 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071485" [e3f60cdb-6214-4987-b692-a4921ece3895] Running
	I0229 02:40:58.213088  369869 system_pods.go:89] "kube-proxy-gr44w" [a74b553f-683a-4e1b-ac48-b4553d00b306] Running
	I0229 02:40:58.213094  369869 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071485" [4c1afe85-10be-45e5-8b99-6bd3cf12a828] Running
	I0229 02:40:58.213107  369869 system_pods.go:89] "metrics-server-57f55c9bc5-fpwzl" [5215d27e-4bf2-4331-89f2-24096dc96b90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:40:58.213117  369869 system_pods.go:89] "storage-provisioner" [d7b70f8e-1689-4526-a39f-eb8005cbecd2] Running
	I0229 02:40:58.213130  369869 system_pods.go:126] duration metric: took 5.970128ms to wait for k8s-apps to be running ...
	I0229 02:40:58.213142  369869 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:40:58.213204  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:40:58.230150  369869 system_svc.go:56] duration metric: took 16.998299ms WaitForService to wait for kubelet.
	I0229 02:40:58.230178  369869 kubeadm.go:581] duration metric: took 4m38.815578079s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:40:58.230245  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:40:58.233660  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:40:58.233719  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:40:58.233737  369869 node_conditions.go:105] duration metric: took 3.486117ms to run NodePressure ...
	I0229 02:40:58.233756  369869 start.go:228] waiting for startup goroutines ...
	I0229 02:40:58.233766  369869 start.go:233] waiting for cluster config update ...
	I0229 02:40:58.233777  369869 start.go:242] writing updated cluster config ...
	I0229 02:40:58.234079  369869 ssh_runner.go:195] Run: rm -f paused
	I0229 02:40:58.285415  369869 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:40:58.287433  369869 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-071485" cluster and "default" namespace by default
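
With the start complete, kubectl is already pointed at the new context, and the logged client/server pair (1.29.2 vs 1.28.4) sits inside kubectl's supported one-minor-version skew. A short sanity pass over the cluster whose CRI-O and kubelet logs are dumped below:

    kubectl --context default-k8s-diff-port-071485 get nodes -o wide
    kubectl --context default-k8s-diff-port-071485 get pods -A
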
	
	
	==> CRI-O <==
	Feb 29 02:50:00 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:50:00.465707910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175000465684681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cefeed28-d158-485e-96a4-fed9c2637545 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:00 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:50:00.466134383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bedf26e0-ffe2-4e6c-a0d4-4357580c9eeb name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:00 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:50:00.466213621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bedf26e0-ffe2-4e6c-a0d4-4357580c9eeb name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:00 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:50:00.466371296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02,PodSandboxId:0c48a66d310655ab2f44cf0fba1ed5662cd89fa93594cb4a45127f109c5609bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709174181818261973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b70f8e-1689-4526-a39f-eb8005cbecd2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee800f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694,PodSandboxId:48350020b0e2cc4ab209e343d9e15a1d5fdd06f201a07de267e4321a1bd3f5e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709174181924034288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj4sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2741c05-81b2-4de6-8329-f88912d48160,},Annotations:map[string]string{io.kubernetes.container.hash: 9e732771,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f,PodSandboxId:1a33a191dbe670137e358519d3834e0805f639b17a9a0eca4260511d90a80c2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709174180078109598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gr44w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a74b553f-683a-4e1b-ac48-b4553d00b306,},Annotations:map[string]string{io.kubernetes.container.hash: ec9d29f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861,PodSandboxId:f6414d4bee4631d262ba32af82ea34f65134b75fe5f17d498b5119a6ef282f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709174159921499699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39250f8b415d6b029a5f20f6b03dea1,},Annotations:map[string]string{io.kubernetes.container.hash: 716a6c18,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2,PodSandboxId:b53822c7895d82ab99052b40e726b36e52b2b7ec65f4ca2884055d4f5c2eec67,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709174159899484093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79083cc52dd4c23bb4518dc
44bebac51,},Annotations:map[string]string{io.kubernetes.container.hash: 92573f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9,PodSandboxId:40615d7a1d3d9dc3f0603d3d2355c82e26433a92959d54f111a82e2049cdabd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709174159830650625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4bbfe260589851d71a917f7ab33efd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349,PodSandboxId:330b39a8726b0e3e8f2afacbb2e6d86b892fdb221c43c3052ee63edee8cd8125,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709174159820853035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9b278231d08a8a1a33579d6513f231fd,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bedf26e0-ffe2-4e6c-a0d4-4357580c9eeb name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:00 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:50:00.511496477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a08ad6d-8da3-4a08-ba05-115ac0c1346f name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:00 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:50:00.511645442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a08ad6d-8da3-4a08-ba05-115ac0c1346f name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:00 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:50:00.512940902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30ebd3e3-65c6-47d7-9042-0cca7408eddf name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:00 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:50:00.513789049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175000513762246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30ebd3e3-65c6-47d7-9042-0cca7408eddf name=/runtime.v1.ImageService/ImageFsInfo
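	The CRI-O block above is the runtime answering routine CRI gRPC polls (Version, ImageFsInfo, ListContainers). A minimal sketch, assuming direct access to the node's /var/run/crio/crio.sock and the k8s.io/cri-api client stubs, of issuing the same calls; this is illustrative, not part of the test suite:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's default socket endpoint (a local unix socket, so no TLS).
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Same RPC as "/runtime.v1.RuntimeService/Version" in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// An empty filter takes the "No filters were applied, returning
	// full container list" path seen above.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Truncate ids to 13 characters, crictl-style.
		fmt.Printf("%-13.13s  %-17s  %s\n", c.Id, c.State, c.Metadata.Name)
	}
}
```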
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	450ceac543af8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   48350020b0e2c       coredns-5dd5756b68-xj4sh
	01b4801ac4a5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   0c48a66d31065       storage-provisioner
	44fe677f15041       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   1a33a191dbe67       kube-proxy-gr44w
	da1b959c6cfcf       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   f6414d4bee463       etcd-default-k8s-diff-port-071485
	f33d63f6603f7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   b53822c7895d8       kube-apiserver-default-k8s-diff-port-071485
	817abd6ec8c85       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   40615d7a1d3d9       kube-controller-manager-default-k8s-diff-port-071485
	15b0755a43227       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   330b39a8726b0       kube-scheduler-default-k8s-diff-port-071485
	
	
	==> coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54411 - 24465 "HINFO IN 7655657684021901365.2426359110297695895. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012429203s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-071485
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-071485
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=default-k8s-diff-port-071485
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_36_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:36:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-071485
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:49:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:46:38 +0000   Thu, 29 Feb 2024 02:36:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:46:38 +0000   Thu, 29 Feb 2024 02:36:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:46:38 +0000   Thu, 29 Feb 2024 02:36:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:46:38 +0000   Thu, 29 Feb 2024 02:36:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.233
	  Hostname:    default-k8s-diff-port-071485
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 caf3cd82fa1947558241624e74122209
	  System UUID:                caf3cd82-fa19-4755-8241-624e74122209
	  Boot ID:                    cd093dea-45bb-4a34-bcff-e5ce0ba51ed6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-xj4sh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-071485                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-071485             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-071485    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-gr44w                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-071485             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-fpwzl                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node default-k8s-diff-port-071485 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node default-k8s-diff-port-071485 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node default-k8s-diff-port-071485 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m   node-controller  Node default-k8s-diff-port-071485 event: Registered Node default-k8s-diff-port-071485 in Controller
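	The percentages in the "Allocated resources" table above are just summed pod requests divided by node allocatable: 850m of 2 CPUs is 42%, 370Mi of 2164188Ki is 17%. A small sketch reproducing that arithmetic with k8s.io/apimachinery quantities (percentOf is a hypothetical helper):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// percentOf returns req as an integer percentage of alloc, using
// milli-units so fractional CPU requests like "850m" divide cleanly.
func percentOf(req, alloc resource.Quantity) int64 {
	return req.MilliValue() * 100 / alloc.MilliValue()
}

func main() {
	cpuReq, cpuAlloc := resource.MustParse("850m"), resource.MustParse("2")
	memReq, memAlloc := resource.MustParse("370Mi"), resource.MustParse("2164188Ki")
	// Prints "cpu 42%, memory 17%", matching the table above.
	fmt.Printf("cpu %d%%, memory %d%%\n",
		percentOf(cpuReq, cpuAlloc), percentOf(memReq, memAlloc))
}
```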
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053900] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044548] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.614521] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.472404] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.776477] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.106254] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.061046] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075778] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.207667] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.144426] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.273225] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[Feb29 02:31] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.062327] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.681483] kauditd_printk_skb: 72 callbacks suppressed
	[  +8.113446] kauditd_printk_skb: 69 callbacks suppressed
	[ +22.882961] kauditd_printk_skb: 1 callbacks suppressed
	[Feb29 02:35] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.740882] systemd-fstab-generator[3393]: Ignoring "noauto" option for root device
	[Feb29 02:36] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.165590] systemd-fstab-generator[3718]: Ignoring "noauto" option for root device
	[ +13.611093] kauditd_printk_skb: 14 callbacks suppressed
	[Feb29 02:37] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] <==
	{"level":"info","ts":"2024-02-29T02:36:00.488776Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"4caceb90632e0222","initial-advertise-peer-urls":["https://192.168.61.233:2380"],"listen-peer-urls":["https://192.168.61.233:2380"],"advertise-client-urls":["https://192.168.61.233:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.233:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:36:00.490582Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:36:00.487758Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.233:2380"}
	{"level":"info","ts":"2024-02-29T02:36:00.491616Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.233:2380"}
	{"level":"info","ts":"2024-02-29T02:36:00.518827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4caceb90632e0222 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-29T02:36:00.519042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4caceb90632e0222 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-29T02:36:00.51917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4caceb90632e0222 received MsgPreVoteResp from 4caceb90632e0222 at term 1"}
	{"level":"info","ts":"2024-02-29T02:36:00.519283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4caceb90632e0222 became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:36:00.519317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4caceb90632e0222 received MsgVoteResp from 4caceb90632e0222 at term 2"}
	{"level":"info","ts":"2024-02-29T02:36:00.519427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4caceb90632e0222 became leader at term 2"}
	{"level":"info","ts":"2024-02-29T02:36:00.519437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4caceb90632e0222 elected leader 4caceb90632e0222 at term 2"}
	{"level":"info","ts":"2024-02-29T02:36:00.52765Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:36:00.532116Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"4caceb90632e0222","local-member-attributes":"{Name:default-k8s-diff-port-071485 ClientURLs:[https://192.168.61.233:2379]}","request-path":"/0/members/4caceb90632e0222/attributes","cluster-id":"bb00245d0a15f92c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:36:00.532623Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:36:00.532789Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:36:00.5372Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.233:2379"}
	{"level":"info","ts":"2024-02-29T02:36:00.541011Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb00245d0a15f92c","local-member-id":"4caceb90632e0222","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:36:00.542293Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:36:00.54562Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:36:00.543643Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:36:00.545693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:36:00.542169Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:46:01.416105Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":668}
	{"level":"info","ts":"2024-02-29T02:46:01.418437Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":668,"took":"1.981671ms","hash":1578095446}
	{"level":"info","ts":"2024-02-29T02:46:01.418488Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1578095446,"revision":668,"compact-revision":-1}
	
	
	==> kernel <==
	 02:50:00 up 19 min,  0 users,  load average: 0.11, 0.12, 0.16
	Linux default-k8s-diff-port-071485 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] <==
	I0229 02:46:03.194155       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:46:04.194457       1 handler_proxy.go:93] no RequestInfo found in the context
	W0229 02:46:04.194515       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:46:04.194730       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:46:04.194738       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0229 02:46:04.194630       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:46:04.195776       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 02:47:03.077268       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:47:04.194957       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:47:04.195125       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:47:04.195159       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:47:04.196113       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:47:04.196164       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:47:04.196171       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 02:48:03.077020       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 02:49:03.076728       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:49:04.195783       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:49:04.195920       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:49:04.195928       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:49:04.197153       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:49:04.197214       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:49:04.197222       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
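	The repeating errors above mean the aggregated v1beta1.metrics.k8s.io APIService is registered but its backing metrics-server never becomes reachable, so OpenAPI aggregation gets a 503 and requeues indefinitely. A minimal diagnostic sketch, assuming a standard kubeconfig and the kube-aggregator generated clientset, that reads the APIService's conditions (roughly what `kubectl get apiservice v1beta1.metrics.k8s.io` reports):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	// Load the default ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := aggregator.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	svc, err := cs.ApiregistrationV1().APIServices().Get(
		context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// For the failure above, Available=False with a message like
	// "failing or missing response from https://...".
	for _, c := range svc.Status.Conditions {
		fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
```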
	
	
	==> kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] <==
	I0229 02:44:19.466014       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:44:48.958411       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:44:49.475463       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:45:18.964663       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:45:19.486913       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:45:48.970478       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:45:49.496249       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:46:18.976902       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:46:19.507092       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:46:48.986614       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:46:49.518971       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:47:18.994503       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:47:19.528679       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 02:47:23.821952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="406.264µs"
	I0229 02:47:36.819799       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.011358ms"
	E0229 02:47:49.001136       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:47:49.538064       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:48:19.008023       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:48:19.546874       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:48:49.013846       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:48:49.556734       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:49:19.022314       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:49:19.566271       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:49:49.029901       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:49:49.575877       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
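
The garbage-collector and resource-quota errors above are downstream of the same unavailable APIService: group discovery still advertises metrics.k8s.io/v1beta1, so every resync fails with a stale-discovery error. A hypothetical client-side spot check (not executed in this run) that surfaces the same failure, since kubectl reports aggregated-discovery problems on stderr:

	# discovery errors for metrics.k8s.io/v1beta1 would appear on stderr
	kubectl --context default-k8s-diff-port-071485 api-resources > /dev/null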
	
	
	==> kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] <==
	I0229 02:36:20.717822       1 server_others.go:69] "Using iptables proxy"
	I0229 02:36:20.764509       1 node.go:141] Successfully retrieved node IP: 192.168.61.233
	I0229 02:36:20.863431       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:36:20.863506       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:36:20.869927       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:36:20.871001       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:36:20.871351       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:36:20.871399       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:36:20.874077       1 config.go:188] "Starting service config controller"
	I0229 02:36:20.877917       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:36:20.877988       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:36:20.877995       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:36:20.879324       1 config.go:315] "Starting node config controller"
	I0229 02:36:20.879359       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:36:20.979473       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:36:20.979666       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:36:20.979673       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] <==
	W0229 02:36:03.256406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 02:36:03.256459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 02:36:03.256612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 02:36:03.256683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 02:36:03.256793       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 02:36:03.257010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 02:36:03.257123       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 02:36:03.257212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 02:36:04.125141       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 02:36:04.125247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 02:36:04.175213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 02:36:04.176085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 02:36:04.198276       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:36:04.198353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:36:04.244700       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 02:36:04.244956       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 02:36:04.244730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 02:36:04.245170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 02:36:04.371975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 02:36:04.372400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 02:36:04.438852       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 02:36:04.439123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 02:36:04.451147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 02:36:04.451256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0229 02:36:04.831765       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
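
The "forbidden" errors above are confined to the first seconds after startup (02:36:03-02:36:04): the scheduler's informers begin listing before the apiserver has finished serving RBAC for system:kube-scheduler, and the noise stops once the caches sync (last line). A hedged follow-up check, not taken from this run, would be:

	# should print "yes" once RBAC is fully served
	kubectl auth can-i list pods --as=system:kube-scheduler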
	
	
	==> kubelet <==
	Feb 29 02:47:10 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:47:10.822160    3725 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 29 02:47:10 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:47:10.822416    3725 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9lpjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-fpwzl_kube-system(5215d27e-4bf2-4331-89f2-24096dc96b90): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 29 02:47:10 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:47:10.822492    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:47:23 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:47:23.803658    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:47:36 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:47:36.802644    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:47:50 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:47:50.806481    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:48:04 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:48:04.801870    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:48:06 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:48:06.854521    3725 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:48:06 default-k8s-diff-port-071485 kubelet[3725]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:48:06 default-k8s-diff-port-071485 kubelet[3725]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:48:06 default-k8s-diff-port-071485 kubelet[3725]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:48:06 default-k8s-diff-port-071485 kubelet[3725]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:48:17 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:48:17.802462    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:48:30 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:48:30.803672    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:48:42 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:48:42.802031    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:48:57 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:48:57.803423    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:49:06 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:49:06.853970    3725 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:49:06 default-k8s-diff-port-071485 kubelet[3725]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:49:06 default-k8s-diff-port-071485 kubelet[3725]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:49:06 default-k8s-diff-port-071485 kubelet[3725]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:49:06 default-k8s-diff-port-071485 kubelet[3725]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:49:08 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:49:08.803847    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:49:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:49:20.805316    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:49:32 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:49:32.804483    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:49:47 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:49:47.802370    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
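
The ErrImagePull/ImagePullBackOff loop above is induced by the test itself: the Audit table later in this report shows metrics-server being enabled with --registries=MetricsServer=fake.domain, which rewrites the image to an unresolvable registry, so the kubelet's DNS lookup of fake.domain can never succeed. A hypothetical way to confirm the rewritten image reference (names taken from the log, not re-run here):

	kubectl --context default-k8s-diff-port-071485 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'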
	
	
	==> storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] <==
	I0229 02:36:22.114406       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:36:22.128623       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:36:22.128719       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 02:36:22.149465       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 02:36:22.149866       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071485_dd8f6f8e-7f5f-4044-a47a-0ebc4a263fbb!
	I0229 02:36:22.149999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e29c156f-0443-4041-ad13-643b9c57e32c", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-071485_dd8f6f8e-7f5f-4044-a47a-0ebc4a263fbb became leader
	I0229 02:36:22.252426       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071485_dd8f6f8e-7f5f-4044-a47a-0ebc4a263fbb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-071485 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-fpwzl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-071485 describe pod metrics-server-57f55c9bc5-fpwzl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-071485 describe pod metrics-server-57f55c9bc5-fpwzl: exit status 1 (68.393755ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fpwzl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-071485 describe pod metrics-server-57f55c9bc5-fpwzl: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (373.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-247751 -n no-preload-247751
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-02-29 02:50:50.983236478 +0000 UTC m=+6008.676383401
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-247751 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-247751 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.656µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-247751 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
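The describe calls above return in microseconds because the test's 9m0s context had already expired before they ran, so no deployment info could be captured. A manual equivalent without the expired context (hypothetical, using the same selector the test waits on):

	kubectl --context no-preload-247751 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard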
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247751 -n no-preload-247751
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-247751 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-247751 logs -n 25: (1.423296279s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo find                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo crio                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-117441                                       | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-542968 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | disable-driver-mounts-542968                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:23 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-915633            | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247751             | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071485  | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275488        | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-915633                 | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247751                  | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071485       | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:40 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275488             | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:50 UTC | 29 Feb 24 02:50 UTC |
	| start   | -p newest-cni-052502 --memory=2200 --alsologtostderr   | newest-cni-052502            | jenkins | v1.32.0 | 29 Feb 24 02:50 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
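
In the Audit table above, rows with an empty End Time are commands that never returned within the test window; the three stop invocations match the TestStartStop/*/Stop failures listed at the top of this report. A hypothetical follow-up to see what state such a profile was left in:

	out/minikube-linux-amd64 status -p embed-certs-915633 --format='{{.Host}}'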
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:50:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:50:12.727717  374821 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:50:12.727853  374821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:50:12.727862  374821 out.go:304] Setting ErrFile to fd 2...
	I0229 02:50:12.727866  374821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:50:12.728168  374821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:50:12.728877  374821 out.go:298] Setting JSON to false
	I0229 02:50:12.730123  374821 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9156,"bootTime":1709165857,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:50:12.730192  374821 start.go:139] virtualization: kvm guest
	I0229 02:50:12.732558  374821 out.go:177] * [newest-cni-052502] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:50:12.734156  374821 notify.go:220] Checking for updates...
	I0229 02:50:12.734284  374821 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:50:12.735665  374821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:50:12.736995  374821 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:50:12.738137  374821 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:50:12.739318  374821 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:50:12.740496  374821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:50:12.742025  374821 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:50:12.742116  374821 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:50:12.742205  374821 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:50:12.742379  374821 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:50:12.780367  374821 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 02:50:12.781749  374821 start.go:299] selected driver: kvm2
	I0229 02:50:12.781767  374821 start.go:903] validating driver "kvm2" against <nil>
	I0229 02:50:12.781779  374821 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:50:12.782707  374821 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:50:12.782790  374821 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:50:12.798844  374821 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:50:12.798890  374821 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0229 02:50:12.798932  374821 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0229 02:50:12.799164  374821 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 02:50:12.799237  374821 cni.go:84] Creating CNI manager for ""
	I0229 02:50:12.799250  374821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:50:12.799260  374821 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 02:50:12.799269  374821 start_flags.go:323] config:
	{Name:newest-cni-052502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:50:12.799397  374821 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:50:12.801179  374821 out.go:177] * Starting control plane node newest-cni-052502 in cluster newest-cni-052502
	I0229 02:50:12.802306  374821 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:50:12.802357  374821 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0229 02:50:12.802370  374821 cache.go:56] Caching tarball of preloaded images
	I0229 02:50:12.802472  374821 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:50:12.802487  374821 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0229 02:50:12.802621  374821 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/config.json ...
	I0229 02:50:12.802652  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/config.json: {Name:mk79971e208d4ada52b1d140a2faac7d49ee77fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:12.802837  374821 start.go:365] acquiring machines lock for newest-cni-052502: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:50:12.802885  374821 start.go:369] acquired machines lock for "newest-cni-052502" in 26.531µs
	I0229 02:50:12.802909  374821 start.go:93] Provisioning new machine with config: &{Name:newest-cni-052502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:50:12.802977  374821 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 02:50:12.804437  374821 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:50:12.804600  374821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:50:12.804646  374821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:50:12.819091  374821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
	I0229 02:50:12.819564  374821 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:50:12.820137  374821 main.go:141] libmachine: Using API Version  1
	I0229 02:50:12.820159  374821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:50:12.820539  374821 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:50:12.820756  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:50:12.820911  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:12.821094  374821 start.go:159] libmachine.API.Create for "newest-cni-052502" (driver="kvm2")
	I0229 02:50:12.821138  374821 client.go:168] LocalClient.Create starting
	I0229 02:50:12.821196  374821 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem
	I0229 02:50:12.821236  374821 main.go:141] libmachine: Decoding PEM data...
	I0229 02:50:12.821253  374821 main.go:141] libmachine: Parsing certificate...
	I0229 02:50:12.821307  374821 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem
	I0229 02:50:12.821326  374821 main.go:141] libmachine: Decoding PEM data...
	I0229 02:50:12.821336  374821 main.go:141] libmachine: Parsing certificate...
	I0229 02:50:12.821352  374821 main.go:141] libmachine: Running pre-create checks...
	I0229 02:50:12.821359  374821 main.go:141] libmachine: (newest-cni-052502) Calling .PreCreateCheck
	I0229 02:50:12.821754  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetConfigRaw
	I0229 02:50:12.822307  374821 main.go:141] libmachine: Creating machine...
	I0229 02:50:12.822324  374821 main.go:141] libmachine: (newest-cni-052502) Calling .Create
	I0229 02:50:12.822496  374821 main.go:141] libmachine: (newest-cni-052502) Creating KVM machine...
	I0229 02:50:12.823758  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found existing default KVM network
	I0229 02:50:12.825880  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:12.825699  374844 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0e0}
	I0229 02:50:12.831015  374821 main.go:141] libmachine: (newest-cni-052502) DBG | trying to create private KVM network mk-newest-cni-052502 192.168.39.0/24...
	I0229 02:50:12.906201  374821 main.go:141] libmachine: (newest-cni-052502) DBG | private KVM network mk-newest-cni-052502 192.168.39.0/24 created
	I0229 02:50:12.906269  374821 main.go:141] libmachine: (newest-cni-052502) Setting up store path in /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502 ...
	I0229 02:50:12.906289  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:12.906170  374844 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:50:12.906308  374821 main.go:141] libmachine: (newest-cni-052502) Building disk image from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 02:50:12.906490  374821 main.go:141] libmachine: (newest-cni-052502) Downloading /home/jenkins/minikube-integration/18063-316644/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:50:13.169595  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:13.169451  374844 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa...
	I0229 02:50:13.316334  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:13.316217  374844 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/newest-cni-052502.rawdisk...
	I0229 02:50:13.316385  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Writing magic tar header
	I0229 02:50:13.316407  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Writing SSH key tar header
	I0229 02:50:13.316509  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:13.316428  374844 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502 ...
	I0229 02:50:13.316567  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502
	I0229 02:50:13.316595  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502 (perms=drwx------)
	I0229 02:50:13.316606  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines
	I0229 02:50:13.316621  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:50:13.316630  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644
	I0229 02:50:13.316640  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines (perms=drwxr-xr-x)
	I0229 02:50:13.316655  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube (perms=drwxr-xr-x)
	I0229 02:50:13.316668  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644 (perms=drwxrwxr-x)
	I0229 02:50:13.316694  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 02:50:13.316705  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 02:50:13.316713  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 02:50:13.316725  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins
	I0229 02:50:13.316735  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home
	I0229 02:50:13.316744  374821 main.go:141] libmachine: (newest-cni-052502) Creating domain...
	I0229 02:50:13.316757  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Skipping /home - not owner
	I0229 02:50:13.318576  374821 main.go:141] libmachine: (newest-cni-052502) define libvirt domain using xml: 
	I0229 02:50:13.318599  374821 main.go:141] libmachine: (newest-cni-052502) <domain type='kvm'>
	I0229 02:50:13.318615  374821 main.go:141] libmachine: (newest-cni-052502)   <name>newest-cni-052502</name>
	I0229 02:50:13.318623  374821 main.go:141] libmachine: (newest-cni-052502)   <memory unit='MiB'>2200</memory>
	I0229 02:50:13.318632  374821 main.go:141] libmachine: (newest-cni-052502)   <vcpu>2</vcpu>
	I0229 02:50:13.318640  374821 main.go:141] libmachine: (newest-cni-052502)   <features>
	I0229 02:50:13.318647  374821 main.go:141] libmachine: (newest-cni-052502)     <acpi/>
	I0229 02:50:13.318654  374821 main.go:141] libmachine: (newest-cni-052502)     <apic/>
	I0229 02:50:13.318674  374821 main.go:141] libmachine: (newest-cni-052502)     <pae/>
	I0229 02:50:13.318687  374821 main.go:141] libmachine: (newest-cni-052502)     
	I0229 02:50:13.318694  374821 main.go:141] libmachine: (newest-cni-052502)   </features>
	I0229 02:50:13.318703  374821 main.go:141] libmachine: (newest-cni-052502)   <cpu mode='host-passthrough'>
	I0229 02:50:13.318711  374821 main.go:141] libmachine: (newest-cni-052502)   
	I0229 02:50:13.318717  374821 main.go:141] libmachine: (newest-cni-052502)   </cpu>
	I0229 02:50:13.318724  374821 main.go:141] libmachine: (newest-cni-052502)   <os>
	I0229 02:50:13.318730  374821 main.go:141] libmachine: (newest-cni-052502)     <type>hvm</type>
	I0229 02:50:13.318739  374821 main.go:141] libmachine: (newest-cni-052502)     <boot dev='cdrom'/>
	I0229 02:50:13.318746  374821 main.go:141] libmachine: (newest-cni-052502)     <boot dev='hd'/>
	I0229 02:50:13.318755  374821 main.go:141] libmachine: (newest-cni-052502)     <bootmenu enable='no'/>
	I0229 02:50:13.318761  374821 main.go:141] libmachine: (newest-cni-052502)   </os>
	I0229 02:50:13.318771  374821 main.go:141] libmachine: (newest-cni-052502)   <devices>
	I0229 02:50:13.318779  374821 main.go:141] libmachine: (newest-cni-052502)     <disk type='file' device='cdrom'>
	I0229 02:50:13.318800  374821 main.go:141] libmachine: (newest-cni-052502)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/boot2docker.iso'/>
	I0229 02:50:13.318808  374821 main.go:141] libmachine: (newest-cni-052502)       <target dev='hdc' bus='scsi'/>
	I0229 02:50:13.318816  374821 main.go:141] libmachine: (newest-cni-052502)       <readonly/>
	I0229 02:50:13.318823  374821 main.go:141] libmachine: (newest-cni-052502)     </disk>
	I0229 02:50:13.318831  374821 main.go:141] libmachine: (newest-cni-052502)     <disk type='file' device='disk'>
	I0229 02:50:13.318839  374821 main.go:141] libmachine: (newest-cni-052502)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 02:50:13.318852  374821 main.go:141] libmachine: (newest-cni-052502)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/newest-cni-052502.rawdisk'/>
	I0229 02:50:13.318860  374821 main.go:141] libmachine: (newest-cni-052502)       <target dev='hda' bus='virtio'/>
	I0229 02:50:13.318880  374821 main.go:141] libmachine: (newest-cni-052502)     </disk>
	I0229 02:50:13.318888  374821 main.go:141] libmachine: (newest-cni-052502)     <interface type='network'>
	I0229 02:50:13.318897  374821 main.go:141] libmachine: (newest-cni-052502)       <source network='mk-newest-cni-052502'/>
	I0229 02:50:13.318904  374821 main.go:141] libmachine: (newest-cni-052502)       <model type='virtio'/>
	I0229 02:50:13.318912  374821 main.go:141] libmachine: (newest-cni-052502)     </interface>
	I0229 02:50:13.318919  374821 main.go:141] libmachine: (newest-cni-052502)     <interface type='network'>
	I0229 02:50:13.318929  374821 main.go:141] libmachine: (newest-cni-052502)       <source network='default'/>
	I0229 02:50:13.318936  374821 main.go:141] libmachine: (newest-cni-052502)       <model type='virtio'/>
	I0229 02:50:13.318944  374821 main.go:141] libmachine: (newest-cni-052502)     </interface>
	I0229 02:50:13.318951  374821 main.go:141] libmachine: (newest-cni-052502)     <serial type='pty'>
	I0229 02:50:13.318961  374821 main.go:141] libmachine: (newest-cni-052502)       <target port='0'/>
	I0229 02:50:13.318968  374821 main.go:141] libmachine: (newest-cni-052502)     </serial>
	I0229 02:50:13.318977  374821 main.go:141] libmachine: (newest-cni-052502)     <console type='pty'>
	I0229 02:50:13.318985  374821 main.go:141] libmachine: (newest-cni-052502)       <target type='serial' port='0'/>
	I0229 02:50:13.318993  374821 main.go:141] libmachine: (newest-cni-052502)     </console>
	I0229 02:50:13.318999  374821 main.go:141] libmachine: (newest-cni-052502)     <rng model='virtio'>
	I0229 02:50:13.319009  374821 main.go:141] libmachine: (newest-cni-052502)       <backend model='random'>/dev/random</backend>
	I0229 02:50:13.319015  374821 main.go:141] libmachine: (newest-cni-052502)     </rng>
	I0229 02:50:13.319026  374821 main.go:141] libmachine: (newest-cni-052502)     
	I0229 02:50:13.319032  374821 main.go:141] libmachine: (newest-cni-052502)     
	I0229 02:50:13.319040  374821 main.go:141] libmachine: (newest-cni-052502)   </devices>
	I0229 02:50:13.319046  374821 main.go:141] libmachine: (newest-cni-052502) </domain>
	I0229 02:50:13.319058  374821 main.go:141] libmachine: (newest-cni-052502) 
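
Once the XML above is assembled, defining and booting the domain is two libvirt calls. A minimal sketch, again assuming the libvirt.org/go/libvirt bindings (the domain.xml path is a placeholder):

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domain.xml would hold the XML assembled above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}
	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boots the VM: "Creating domain..."
		log.Fatal(err)
	}
}
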
	I0229 02:50:13.324078  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:18:95:ba in network default
	I0229 02:50:13.324819  374821 main.go:141] libmachine: (newest-cni-052502) Ensuring networks are active...
	I0229 02:50:13.324849  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:13.325621  374821 main.go:141] libmachine: (newest-cni-052502) Ensuring network default is active
	I0229 02:50:13.325979  374821 main.go:141] libmachine: (newest-cni-052502) Ensuring network mk-newest-cni-052502 is active
	I0229 02:50:13.326496  374821 main.go:141] libmachine: (newest-cni-052502) Getting domain xml...
	I0229 02:50:13.327307  374821 main.go:141] libmachine: (newest-cni-052502) Creating domain...
	I0229 02:50:14.598130  374821 main.go:141] libmachine: (newest-cni-052502) Waiting to get IP...
	I0229 02:50:14.598866  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:14.599421  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:14.599488  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:14.599435  374844 retry.go:31] will retry after 215.994494ms: waiting for machine to come up
	I0229 02:50:14.817032  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:14.817566  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:14.817596  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:14.817521  374844 retry.go:31] will retry after 376.066204ms: waiting for machine to come up
	I0229 02:50:15.195070  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:15.195620  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:15.195685  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:15.195571  374844 retry.go:31] will retry after 368.532388ms: waiting for machine to come up
	I0229 02:50:15.566245  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:15.566737  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:15.566760  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:15.566696  374844 retry.go:31] will retry after 443.886219ms: waiting for machine to come up
	I0229 02:50:16.012395  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:16.012863  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:16.012892  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:16.012805  374844 retry.go:31] will retry after 690.20974ms: waiting for machine to come up
	I0229 02:50:16.704458  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:16.704840  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:16.704869  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:16.704800  374844 retry.go:31] will retry after 678.534797ms: waiting for machine to come up
	I0229 02:50:17.384591  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:17.385072  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:17.385111  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:17.385072  374844 retry.go:31] will retry after 1.034211028s: waiting for machine to come up
	I0229 02:50:18.420604  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:18.421111  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:18.421142  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:18.421049  374844 retry.go:31] will retry after 1.07674173s: waiting for machine to come up
	I0229 02:50:19.499142  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:19.499549  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:19.499572  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:19.499515  374844 retry.go:31] will retry after 1.407577159s: waiting for machine to come up
	I0229 02:50:20.908904  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:20.909346  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:20.909371  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:20.909293  374844 retry.go:31] will retry after 1.560987942s: waiting for machine to come up
	I0229 02:50:22.471531  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:22.472048  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:22.472077  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:22.472028  374844 retry.go:31] will retry after 2.683754954s: waiting for machine to come up
	I0229 02:50:25.158729  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:25.159283  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:25.159316  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:25.159203  374844 retry.go:31] will retry after 3.064755607s: waiting for machine to come up
	I0229 02:50:28.226168  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:28.226598  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:28.226627  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:28.226556  374844 retry.go:31] will retry after 2.893942808s: waiting for machine to come up
	I0229 02:50:31.123258  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:31.123692  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:31.123717  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:31.123647  374844 retry.go:31] will retry after 3.539127651s: waiting for machine to come up
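
The retry.go lines above form a jittered, growing backoff while polling for a DHCP lease. A sketch of that pattern, with illustrative constants (minikube's actual retry helper differs in detail):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn with a jittered, doubling delay until it succeeds
// or maxWait elapses, producing "will retry after ..." style intervals.
func retry(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if err := fn(); err == nil {
			return nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	_ = retry(func() error { return errors.New("no lease yet") }, 2*time.Second)
}
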
	I0229 02:50:34.664083  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.664596  374821 main.go:141] libmachine: (newest-cni-052502) Found IP for machine: 192.168.39.3
	I0229 02:50:34.664624  374821 main.go:141] libmachine: (newest-cni-052502) Reserving static IP address...
	I0229 02:50:34.664639  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has current primary IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.665091  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find host DHCP lease matching {name: "newest-cni-052502", mac: "52:54:00:19:fc:ef", ip: "192.168.39.3"} in network mk-newest-cni-052502
	I0229 02:50:34.743587  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Getting to WaitForSSH function...
	I0229 02:50:34.743625  374821 main.go:141] libmachine: (newest-cni-052502) Reserved static IP address: 192.168.39.3
	I0229 02:50:34.743671  374821 main.go:141] libmachine: (newest-cni-052502) Waiting for SSH to be available...
	I0229 02:50:34.746635  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.747103  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:34.747134  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.747267  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Using SSH client type: external
	I0229 02:50:34.747284  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa (-rw-------)
	I0229 02:50:34.747370  374821 main.go:141] libmachine: (newest-cni-052502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:50:34.747406  374821 main.go:141] libmachine: (newest-cni-052502) DBG | About to run SSH command:
	I0229 02:50:34.747425  374821 main.go:141] libmachine: (newest-cni-052502) DBG | exit 0
	I0229 02:50:34.874573  374821 main.go:141] libmachine: (newest-cni-052502) DBG | SSH cmd err, output: <nil>: 
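
WaitForSSH shells out to the system ssh with the options logged above and runs "exit 0" until it succeeds. A sketch of the equivalent invocation (the key path is a placeholder):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same option set as the logged probe; key path is a placeholder.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/path/to/machines/newest-cni-052502/id_rsa",
		"-p", "22",
		"docker@192.168.39.3",
		"exit 0",
	}
	if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
		log.Fatalf("SSH not ready yet: %v", err)
	}
}
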
	I0229 02:50:34.874787  374821 main.go:141] libmachine: (newest-cni-052502) KVM machine creation complete!
	I0229 02:50:34.875135  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetConfigRaw
	I0229 02:50:34.875786  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:34.875994  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:34.876202  374821 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 02:50:34.876231  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetState
	I0229 02:50:34.877701  374821 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 02:50:34.877715  374821 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 02:50:34.877721  374821 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 02:50:34.877727  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:34.880516  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.880934  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:34.880964  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.881078  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:34.881275  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:34.881461  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:34.881638  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:34.881830  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:34.882085  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:34.882103  374821 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 02:50:34.990136  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:50:34.990162  374821 main.go:141] libmachine: Detecting the provisioner...
	I0229 02:50:34.990170  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:34.993087  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.993469  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:34.993499  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.993676  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:34.993908  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:34.994125  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:34.994324  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:34.994533  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:34.994716  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:34.994728  374821 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 02:50:35.103711  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 02:50:35.103786  374821 main.go:141] libmachine: found compatible host: buildroot
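
Detecting the provisioner comes down to parsing the ID field of the /etc/os-release output above. A minimal sketch:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// ID=buildroot selects the buildroot provisioner.
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			fmt.Println("found compatible host:", strings.Trim(v, `"`))
		}
	}
}
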
	I0229 02:50:35.103796  374821 main.go:141] libmachine: Provisioning with buildroot...
	I0229 02:50:35.103807  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:50:35.104080  374821 buildroot.go:166] provisioning hostname "newest-cni-052502"
	I0229 02:50:35.104112  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:50:35.104308  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.106944  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.107339  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.107387  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.107547  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:35.107753  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.107933  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.108092  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:35.108276  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:35.108478  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:35.108491  374821 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-052502 && echo "newest-cni-052502" | sudo tee /etc/hostname
	I0229 02:50:35.231050  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052502
	
	I0229 02:50:35.231078  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.233979  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.234364  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.234408  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.234578  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:35.234761  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.234958  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.235084  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:35.235218  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:35.235457  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:35.235485  374821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-052502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-052502/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-052502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:50:35.354365  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
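
The /etc/hosts script above is templated from the machine name. A sketch of rendering it in Go (hostname hardcoded for illustration):

package main

import "fmt"

func main() {
	const host = "newest-cni-052502" // illustrative; taken from the log above
	script := fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, host)
	fmt.Println(script)
}
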
	I0229 02:50:35.354450  374821 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:50:35.354484  374821 buildroot.go:174] setting up certificates
	I0229 02:50:35.354499  374821 provision.go:83] configureAuth start
	I0229 02:50:35.354516  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:50:35.354826  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:50:35.357855  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.358305  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.358332  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.358504  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.361003  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.361391  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.361434  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.361571  374821 provision.go:138] copyHostCerts
	I0229 02:50:35.361636  374821 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:50:35.361667  374821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:50:35.361778  374821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:50:35.361876  374821 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:50:35.361885  374821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:50:35.361915  374821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:50:35.361965  374821 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:50:35.361972  374821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:50:35.361992  374821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:50:35.362032  374821 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.newest-cni-052502 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube newest-cni-052502]
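
The server certificate is issued from the local CA with the SAN list shown in the log line above. A minimal crypto/x509 sketch; for brevity it generates a throwaway CA in memory, where the real flow loads ca.pem/ca-key.pem from the store:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server cert carrying the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-052502"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-052502"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.3"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
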
	I0229 02:50:35.448234  374821 provision.go:172] copyRemoteCerts
	I0229 02:50:35.448294  374821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:50:35.448320  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.451286  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.451606  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.451634  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.451848  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:35.452011  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.452183  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:35.452285  374821 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:50:35.535100  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:50:35.562514  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:50:35.589505  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:50:35.617178  374821 provision.go:86] duration metric: configureAuth took 262.644629ms
	I0229 02:50:35.617208  374821 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:50:35.617427  374821 config.go:182] Loaded profile config "newest-cni-052502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:50:35.617557  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.620493  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.620888  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.620918  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.621073  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:35.621298  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.621492  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.621644  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:35.621847  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:35.622006  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:35.622019  374821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:50:35.921401  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:50:35.921430  374821 main.go:141] libmachine: Checking connection to Docker...
	I0229 02:50:35.921441  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetURL
	I0229 02:50:35.922795  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Using libvirt version 6000000
	I0229 02:50:35.925270  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.925740  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.925771  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.925975  374821 main.go:141] libmachine: Docker is up and running!
	I0229 02:50:35.925989  374821 main.go:141] libmachine: Reticulating splines...
	I0229 02:50:35.926003  374821 client.go:171] LocalClient.Create took 23.10484651s
	I0229 02:50:35.926025  374821 start.go:167] duration metric: libmachine.API.Create for "newest-cni-052502" took 23.104933145s
	I0229 02:50:35.926037  374821 start.go:300] post-start starting for "newest-cni-052502" (driver="kvm2")
	I0229 02:50:35.926053  374821 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:50:35.926073  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:35.926346  374821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:50:35.926373  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.928805  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.929093  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.929131  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.929265  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:35.929446  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.929620  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:35.929746  374821 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:50:36.015843  374821 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:50:36.021514  374821 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:50:36.021544  374821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:50:36.021626  374821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:50:36.021721  374821 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:50:36.021845  374821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:50:36.033720  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:50:36.060682  374821 start.go:303] post-start completed in 134.629361ms
	I0229 02:50:36.060745  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetConfigRaw
	I0229 02:50:36.061481  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:50:36.064372  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.064760  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:36.064786  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.064994  374821 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/config.json ...
	I0229 02:50:36.065225  374821 start.go:128] duration metric: createHost completed in 23.262235057s
	I0229 02:50:36.065254  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:36.067542  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.067865  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:36.067895  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.068030  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:36.068242  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:36.068443  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:36.068610  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:36.068812  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:36.068970  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:36.068983  374821 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:50:36.172256  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709175036.151249194
	
	I0229 02:50:36.172282  374821 fix.go:206] guest clock: 1709175036.151249194
	I0229 02:50:36.172292  374821 fix.go:219] Guest: 2024-02-29 02:50:36.151249194 +0000 UTC Remote: 2024-02-29 02:50:36.065240506 +0000 UTC m=+23.389219830 (delta=86.008688ms)
	I0229 02:50:36.172329  374821 fix.go:190] guest clock delta is within tolerance: 86.008688ms
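
The clock check compares the guest's date +%s.%N output against the host and accepts a small delta, as in the fix.go lines above. A sketch, with the tolerance as an assumption (the real threshold lives in minikube's fix logic):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time as reported over SSH (values taken from the log above).
	guest := time.Unix(1709175036, 151249194)
	remote := time.Now() // host-side reference time

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for illustration
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v; would resync\n", delta)
	}
}
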
	I0229 02:50:36.172337  374821 start.go:83] releasing machines lock for "newest-cni-052502", held for 23.369440418s
	I0229 02:50:36.172375  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:36.172697  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:50:36.175448  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.175853  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:36.175893  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.176101  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:36.176653  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:36.176845  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:36.176946  374821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:50:36.176999  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:36.177075  374821 ssh_runner.go:195] Run: cat /version.json
	I0229 02:50:36.177101  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:36.179667  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.179899  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.180055  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:36.180076  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.180321  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:36.180331  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:36.180348  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.180502  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:36.180582  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:36.180689  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:36.180701  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:36.180790  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:36.180854  374821 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:50:36.180980  374821 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:50:36.259768  374821 ssh_runner.go:195] Run: systemctl --version
	I0229 02:50:36.286936  374821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:50:36.458813  374821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:50:36.468277  374821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:50:36.468461  374821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:50:36.488655  374821 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:50:36.488677  374821 start.go:475] detecting cgroup driver to use...
	I0229 02:50:36.488733  374821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:50:36.508407  374821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:50:36.523751  374821 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:50:36.523802  374821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:50:36.538616  374821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:50:36.553605  374821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:50:36.680629  374821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:50:36.839915  374821 docker.go:233] disabling docker service ...
	I0229 02:50:36.840012  374821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:50:36.857071  374821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:50:36.873581  374821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:50:37.029715  374821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:50:37.165022  374821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:50:37.182206  374821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:50:37.204760  374821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:50:37.204818  374821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:50:37.216139  374821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:50:37.216196  374821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:50:37.227926  374821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:50:37.239374  374821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
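
The four sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf: pause image, cgroup manager, and the conmon cgroup. A sketch that applies the same edits as a batch (run locally here for illustration; the test harness executes them over SSH):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// The same four edits as logged above.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", c, err, out)
		}
	}
}
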
	I0229 02:50:37.251162  374821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:50:37.265218  374821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:50:37.276719  374821 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:50:37.276777  374821 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:50:37.292624  374821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:50:37.304108  374821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:50:37.454946  374821 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:50:37.628940  374821 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:50:37.629029  374821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:50:37.635522  374821 start.go:543] Will wait 60s for crictl version
	I0229 02:50:37.635581  374821 ssh_runner.go:195] Run: which crictl
	I0229 02:50:37.639943  374821 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:50:37.686192  374821 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:50:37.686305  374821 ssh_runner.go:195] Run: crio --version
	I0229 02:50:37.718681  374821 ssh_runner.go:195] Run: crio --version
	I0229 02:50:37.750993  374821 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 02:50:37.752560  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:50:37.755351  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:37.755728  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:37.755759  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:37.755981  374821 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:50:37.760789  374821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:50:37.776559  374821 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 02:50:37.777821  374821 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:50:37.777885  374821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:50:37.824282  374821 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 02:50:37.824391  374821 ssh_runner.go:195] Run: which lz4
	I0229 02:50:37.828981  374821 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:50:37.833913  374821 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:50:37.833942  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0229 02:50:39.508553  374821 crio.go:444] Took 1.679606 seconds to copy over tarball
	I0229 02:50:39.508632  374821 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:50:42.103341  374821 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.594680261s)
	I0229 02:50:42.103378  374821 crio.go:451] Took 2.594798 seconds to extract the tarball
	I0229 02:50:42.103400  374821 ssh_runner.go:146] rm: /preloaded.tar.lz4
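
ssh_runner times long-running commands and logs a "Completed: ... (duration)" line like the one above. A sketch of that wrapper around the same lz4 extraction (paths illustrative, run locally rather than over SSH):

package main

import (
	"log"
	"os/exec"
	"time"
)

// runTimed mirrors the ssh_runner pattern above: run a command and log how
// long it took when the duration is worth reporting.
func runTimed(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second {
		log.Printf("Completed: %s: (%s)", name, d)
	}
	return err
}

func main() {
	if err := runTimed("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}
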
	I0229 02:50:42.143748  374821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:50:42.195268  374821 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:50:42.195294  374821 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:50:42.195385  374821 ssh_runner.go:195] Run: crio config
	I0229 02:50:42.244619  374821 cni.go:84] Creating CNI manager for ""
	I0229 02:50:42.244647  374821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:50:42.244680  374821 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 02:50:42.244705  374821 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-052502 NodeName:newest-cni-052502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:m
ap[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:50:42.244865  374821 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-052502"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
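Note: the rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new and later promoted to kubeadm.yaml. A dry run is a cheap way to sanity-check a config like this without modifying the node (a sketch, using the version-matched kubeadm binary from this run):

    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run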
	
	I0229 02:50:42.244969  374821 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-052502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
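Note: the kubelet unit above relies on the standard systemd drop-in convention — the empty `ExecStart=` line first clears the packaged start command, and the full command line then replaces it. Per the scp lines below, the fragment lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; after editing such a drop-in by hand, systemd must re-read it:

    # Pick up changes to a unit drop-in and restart the service.
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet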
	I0229 02:50:42.245029  374821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 02:50:42.257131  374821 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:50:42.257228  374821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:50:42.268835  374821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (417 bytes)
	I0229 02:50:42.287405  374821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 02:50:42.306094  374821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I0229 02:50:42.324756  374821 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0229 02:50:42.329291  374821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:50:42.343936  374821 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502 for IP: 192.168.39.3
	I0229 02:50:42.343971  374821 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:42.344140  374821 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:50:42.344185  374821 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:50:42.344228  374821 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.key
	I0229 02:50:42.344242  374821 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.crt with IP's: []
	I0229 02:50:42.601465  374821 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.crt ...
	I0229 02:50:42.601496  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.crt: {Name:mkb10a9b350cb6b477d3d1773e938be9b48f7e3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:42.601699  374821 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.key ...
	I0229 02:50:42.601718  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.key: {Name:mk3e2f519fc0812b0283e9892363152739bfbc85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:42.601838  374821 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key.599d509e
	I0229 02:50:42.601861  374821 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt.599d509e with IP's: [192.168.39.3 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:50:42.823809  374821 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt.599d509e ...
	I0229 02:50:42.823844  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt.599d509e: {Name:mkd5e9be92674d8ea2ea382e6a4a491444219f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:42.824043  374821 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key.599d509e ...
	I0229 02:50:42.824065  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key.599d509e: {Name:mkcca81814e862bc1f0c2c52937f10fd2433a80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:42.824169  374821 certs.go:337] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt.599d509e -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt
	I0229 02:50:42.824271  374821 certs.go:341] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key.599d509e -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key
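Note: the apiserver certificate is generated with the node IP (192.168.39.3), the first service-CIDR address (10.96.0.1), and the loopback addresses as SANs, then the hash-suffixed files (.599d509e) are copied to their canonical names. The resulting SANs can be verified with openssl (a sketch, using the profile path from this run):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt \
      | grep -A1 'Subject Alternative Name'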
	I0229 02:50:42.824338  374821 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.key
	I0229 02:50:42.824354  374821 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.crt with IP's: []
	I0229 02:50:43.135550  374821 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.crt ...
	I0229 02:50:43.135592  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.crt: {Name:mk624109b6a026719ae142b1084c83c24f8b99cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:43.135793  374821 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.key ...
	I0229 02:50:43.135813  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.key: {Name:mkfe74020102b9bd2320bc39d9b1f38c0d6d358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:43.136024  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:50:43.136081  374821 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:50:43.136096  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:50:43.136142  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:50:43.136176  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:50:43.136207  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:50:43.136274  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:50:43.137156  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:50:43.169150  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:50:43.198636  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:50:43.227778  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:50:43.257237  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:50:43.284632  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:50:43.315651  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:50:43.344242  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:50:43.374619  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:50:43.404734  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:50:43.433574  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:50:43.463696  374821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:50:43.483424  374821 ssh_runner.go:195] Run: openssl version
	I0229 02:50:43.490700  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:50:43.505018  374821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:50:43.510881  374821 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:50:43.510960  374821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:50:43.517601  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:50:43.531800  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:50:43.545585  374821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:50:43.551076  374821 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:50:43.551142  374821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:50:43.557446  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:50:43.571494  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:50:43.586705  374821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:50:43.592762  374821 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:50:43.592832  374821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:50:43.599715  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
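Note: the `openssl x509 -hash -noout` calls above compute the subject-name hash that OpenSSL uses to look up CA certificates, and each `ln -fs .../<hash>.0` symlink (b5213941.0, 3ec20f2e.0, 51391683.0 in this run) makes the corresponding PEM discoverable under /etc/ssl/certs. The same wiring done by hand:

    # Link a CA into the OpenSSL hash directory so TLS clients can find it.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"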
	I0229 02:50:43.614375  374821 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:50:43.619550  374821 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:50:43.619617  374821 kubeadm.go:404] StartCluster: {Name:newest-cni-052502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenki
ns:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:50:43.619741  374821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:50:43.619802  374821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:50:43.671528  374821 cri.go:89] found id: ""
	I0229 02:50:43.671600  374821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:50:43.683402  374821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:50:43.695131  374821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:50:43.706837  374821 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:50:43.706886  374821 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
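Note: the long --ignore-preflight-errors list waives checks that minikube handles itself (pre-populated directories and manifests, swap, CPU and memory floors). To see which preflight checks would fire on a node without running a full init, the preflight phase can be invoked on its own (a sketch, against the same config file):

    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init phase preflight \
        --config /var/tmp/minikube/kubeadm.yaml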
	I0229 02:50:43.821968  374821 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 02:50:43.822208  374821 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:50:44.074143  374821 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:50:44.074267  374821 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:50:44.074392  374821 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:50:44.340767  374821 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:50:44.433518  374821 out.go:204]   - Generating certificates and keys ...
	I0229 02:50:44.433653  374821 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:50:44.433752  374821 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:50:44.675166  374821 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:50:45.102325  374821 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:50:45.204610  374821 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:50:45.404744  374821 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:50:45.581501  374821 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:50:45.581878  374821 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-052502] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0229 02:50:45.903152  374821 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:50:45.903351  374821 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-052502] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0229 02:50:46.048573  374821 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:50:46.125364  374821 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:50:46.271627  374821 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:50:46.271799  374821 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:50:46.393572  374821 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:50:46.671199  374821 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 02:50:46.843556  374821 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:50:47.035187  374821 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:50:47.188068  374821 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:50:47.188632  374821 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:50:47.193027  374821 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:50:47.194675  374821 out.go:204]   - Booting up control plane ...
	I0229 02:50:47.194778  374821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:50:47.195317  374821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:50:47.196229  374821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:50:47.220819  374821 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:50:47.221809  374821 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:50:47.222004  374821 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:50:47.369164  374821 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	
	
	==> CRI-O <==
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.703589998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175051703566853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=105f3517-3887-4a69-9eda-e275a0593029 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.704337561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0cc7e0f0-72dc-4565-9f78-9bb9287ccb27 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.704412365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0cc7e0f0-72dc-4565-9f78-9bb9287ccb27 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.704682163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173898403089298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99cc0a7d8158b241a6363055d477e4538b67123266993ebc4dc7e6a9ab810e19,PodSandboxId:5dcd7325799d3ac205f9c49b90f57add472abce7a9a6ec7400e34d91b3c653e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173878169797352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22d0d5e3-3658-4122-adf1-8faffa8de817,},Annotations:map[string]string{io.kubernetes.container.hash: abba158a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab,PodSandboxId:9a3a5a98dfc7d11e1d336ca7115304ec25c8a76c37f7598640afbcfc07f9c1af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709173875201146415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2z5w8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39b5eb65-690b-488b-9bec-7cfabcc27829,},Annotations:map[string]string{io.kubernetes.container.hash: ab6762a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e,PodSandboxId:65bbb4d7efe5afd8099ff4ef00114c6ef456d6b4d5363221972138b81cfc0bc3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709173867581561724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7849f368-0bca-4c2b-ae
72-cbacef9bbb72,},Annotations:map[string]string{io.kubernetes.container.hash: b3086f3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173867580132672,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7
d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772,PodSandboxId:484ad89cf88e2554b842362cc426f924ff51d30e7f14bff7a23c0a1fd37b4661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173862820309191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0fd2b2d3a34444351a58f9cc442592,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0,PodSandboxId:9d7b51641b06fd63936e813ccc91714206dffe2ae20ba89cee69829718659b22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173862822165374,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bddedf1d587af5333bf6d061dbebe3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8122a282,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75,PodSandboxId:2c3c97570a989ff886d9fdd97254fdcf1e146d45c438de3302180cae14b318f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173862789331849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc56f6a18092022bffc9b777210b75f,},Annotations:map[string]string{io.kubernetes.container.hash: 3709
cc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5,PodSandboxId:efab7c788859d9985732ca2f1a43fdd5e21c04f334f127d75320136fc31028de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173862790794588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4104973fb9e5b903cb363d606f23991,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0cc7e0f0-72dc-4565-9f78-9bb9287ccb27 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.767245154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b452bded-9083-4a23-ab13-5c666a3070f8 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.767360682Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b452bded-9083-4a23-ab13-5c666a3070f8 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.769610820Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5e43669-e41d-4f3b-928f-5c16d46b2311 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.770109732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175051770075486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5e43669-e41d-4f3b-928f-5c16d46b2311 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.771170824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94422ae3-b85e-41ac-a8b9-b007de51523b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.771226278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94422ae3-b85e-41ac-a8b9-b007de51523b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.771457773Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173898403089298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99cc0a7d8158b241a6363055d477e4538b67123266993ebc4dc7e6a9ab810e19,PodSandboxId:5dcd7325799d3ac205f9c49b90f57add472abce7a9a6ec7400e34d91b3c653e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173878169797352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22d0d5e3-3658-4122-adf1-8faffa8de817,},Annotations:map[string]string{io.kubernetes.container.hash: abba158a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab,PodSandboxId:9a3a5a98dfc7d11e1d336ca7115304ec25c8a76c37f7598640afbcfc07f9c1af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709173875201146415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2z5w8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39b5eb65-690b-488b-9bec-7cfabcc27829,},Annotations:map[string]string{io.kubernetes.container.hash: ab6762a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e,PodSandboxId:65bbb4d7efe5afd8099ff4ef00114c6ef456d6b4d5363221972138b81cfc0bc3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709173867581561724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7849f368-0bca-4c2b-ae
72-cbacef9bbb72,},Annotations:map[string]string{io.kubernetes.container.hash: b3086f3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173867580132672,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7
d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772,PodSandboxId:484ad89cf88e2554b842362cc426f924ff51d30e7f14bff7a23c0a1fd37b4661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173862820309191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0fd2b2d3a34444351a58f9cc442592,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0,PodSandboxId:9d7b51641b06fd63936e813ccc91714206dffe2ae20ba89cee69829718659b22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173862822165374,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bddedf1d587af5333bf6d061dbebe3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8122a282,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75,PodSandboxId:2c3c97570a989ff886d9fdd97254fdcf1e146d45c438de3302180cae14b318f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173862789331849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc56f6a18092022bffc9b777210b75f,},Annotations:map[string]string{io.kubernetes.container.hash: 3709
cc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5,PodSandboxId:efab7c788859d9985732ca2f1a43fdd5e21c04f334f127d75320136fc31028de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173862790794588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4104973fb9e5b903cb363d606f23991,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94422ae3-b85e-41ac-a8b9-b007de51523b name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.823919760Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a9cea05-2820-408a-8757-bb58a7aa487b name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.824068428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a9cea05-2820-408a-8757-bb58a7aa487b name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.826826031Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8674190e-bd42-4b98-8a4f-080059cbd8c1 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.827425470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175051827388022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8674190e-bd42-4b98-8a4f-080059cbd8c1 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.828236926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=871bc972-f730-4bb3-b0ac-82bf90774568 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.828310925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=871bc972-f730-4bb3-b0ac-82bf90774568 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.828573724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173898403089298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99cc0a7d8158b241a6363055d477e4538b67123266993ebc4dc7e6a9ab810e19,PodSandboxId:5dcd7325799d3ac205f9c49b90f57add472abce7a9a6ec7400e34d91b3c653e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173878169797352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22d0d5e3-3658-4122-adf1-8faffa8de817,},Annotations:map[string]string{io.kubernetes.container.hash: abba158a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab,PodSandboxId:9a3a5a98dfc7d11e1d336ca7115304ec25c8a76c37f7598640afbcfc07f9c1af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709173875201146415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2z5w8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39b5eb65-690b-488b-9bec-7cfabcc27829,},Annotations:map[string]string{io.kubernetes.container.hash: ab6762a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e,PodSandboxId:65bbb4d7efe5afd8099ff4ef00114c6ef456d6b4d5363221972138b81cfc0bc3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709173867581561724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7849f368-0bca-4c2b-ae
72-cbacef9bbb72,},Annotations:map[string]string{io.kubernetes.container.hash: b3086f3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173867580132672,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7
d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772,PodSandboxId:484ad89cf88e2554b842362cc426f924ff51d30e7f14bff7a23c0a1fd37b4661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173862820309191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0fd2b2d3a34444351a58f9cc442592,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0,PodSandboxId:9d7b51641b06fd63936e813ccc91714206dffe2ae20ba89cee69829718659b22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173862822165374,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bddedf1d587af5333bf6d061dbebe3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8122a282,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75,PodSandboxId:2c3c97570a989ff886d9fdd97254fdcf1e146d45c438de3302180cae14b318f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173862789331849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc56f6a18092022bffc9b777210b75f,},Annotations:map[string]string{io.kubernetes.container.hash: 3709
cc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5,PodSandboxId:efab7c788859d9985732ca2f1a43fdd5e21c04f334f127d75320136fc31028de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173862790794588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4104973fb9e5b903cb363d606f23991,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=871bc972-f730-4bb3-b0ac-82bf90774568 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.876918857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=516f8b1f-c021-43c3-bfb6-bed54cf59633 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.877097780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=516f8b1f-c021-43c3-bfb6-bed54cf59633 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.879469927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=664cbe9c-4559-43bc-b708-62acb9d6d126 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.880024382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175051879916661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=664cbe9c-4559-43bc-b708-62acb9d6d126 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.881191478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8dfc4ad-b89d-4623-83de-4d696c40219c name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.881304964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8dfc4ad-b89d-4623-83de-4d696c40219c name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:51 no-preload-247751 crio[670]: time="2024-02-29 02:50:51.881924328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173898403089298,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99cc0a7d8158b241a6363055d477e4538b67123266993ebc4dc7e6a9ab810e19,PodSandboxId:5dcd7325799d3ac205f9c49b90f57add472abce7a9a6ec7400e34d91b3c653e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173878169797352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22d0d5e3-3658-4122-adf1-8faffa8de817,},Annotations:map[string]string{io.kubernetes.container.hash: abba158a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab,PodSandboxId:9a3a5a98dfc7d11e1d336ca7115304ec25c8a76c37f7598640afbcfc07f9c1af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1709173875201146415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2z5w8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39b5eb65-690b-488b-9bec-7cfabcc27829,},Annotations:map[string]string{io.kubernetes.container.hash: ab6762a3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e,PodSandboxId:65bbb4d7efe5afd8099ff4ef00114c6ef456d6b4d5363221972138b81cfc0bc3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1709173867581561724,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cdc4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7849f368-0bca-4c2b-ae
72-cbacef9bbb72,},Annotations:map[string]string{io.kubernetes.container.hash: b3086f3a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a,PodSandboxId:e77531cab13a8b614cfe9e44cb6cfea8dcde0506ea43e27a5b158fb63a2978ec,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173867580132672,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ba0e6f-835a-42d6-a7a9-bfafedf7a7
d8,},Annotations:map[string]string{io.kubernetes.container.hash: 49c5aae5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772,PodSandboxId:484ad89cf88e2554b842362cc426f924ff51d30e7f14bff7a23c0a1fd37b4661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1709173862820309191,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a0fd2b2d3a34444351a58f9cc442592,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0,PodSandboxId:9d7b51641b06fd63936e813ccc91714206dffe2ae20ba89cee69829718659b22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1709173862822165374,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bddedf1d587af5333bf6d061dbebe3a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 8122a282,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75,PodSandboxId:2c3c97570a989ff886d9fdd97254fdcf1e146d45c438de3302180cae14b318f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1709173862789331849,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dc56f6a18092022bffc9b777210b75f,},Annotations:map[string]string{io.kubernetes.container.hash: 3709
cc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5,PodSandboxId:efab7c788859d9985732ca2f1a43fdd5e21c04f334f127d75320136fc31028de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1709173862790794588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-247751,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4104973fb9e5b903cb363d606f23991,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8dfc4ad-b89d-4623-83de-4d696c40219c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1d3ea01e4d000       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       3                   e77531cab13a8       storage-provisioner
	99cc0a7d8158b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   5dcd7325799d3       busybox
	869cb90ce44f1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   9a3a5a98dfc7d       coredns-76f75df574-2z5w8
	1061c7e86aceb       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      19 minutes ago      Running             kube-proxy                1                   65bbb4d7efe5a       kube-proxy-cdc4l
	3c88c68c0c40f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   e77531cab13a8       storage-provisioner
	92977e2b17423       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      19 minutes ago      Running             etcd                      1                   9d7b51641b06f       etcd-no-preload-247751
	d2cd6c6c49c57       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      19 minutes ago      Running             kube-scheduler            1                   484ad89cf88e2       kube-scheduler-no-preload-247751
	5520037685c0c       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      19 minutes ago      Running             kube-controller-manager   1                   efab7c788859d       kube-controller-manager-no-preload-247751
	60cc548bfcd72       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      19 minutes ago      Running             kube-apiserver            1                   2c3c97570a989       kube-apiserver-no-preload-247751
	
	
	==> coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45956 - 20729 "HINFO IN 3196636296519869444.5891557949309614254. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.069741927s
	
	
	==> describe nodes <==
	Name:               no-preload-247751
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-247751
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=no-preload-247751
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_21_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:21:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-247751
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:50:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:46:56 +0000   Thu, 29 Feb 2024 02:21:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:46:56 +0000   Thu, 29 Feb 2024 02:21:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:46:56 +0000   Thu, 29 Feb 2024 02:21:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:46:56 +0000   Thu, 29 Feb 2024 02:31:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.114
	  Hostname:    no-preload-247751
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 881168a5061a46d9ae56f8a52fa75d96
	  System UUID:                881168a5-061a-46d9-ae56-f8a52fa75d96
	  Boot ID:                    2707afbd-f3e4-443c-abf7-896de325fc97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-76f75df574-2z5w8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-247751                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-247751             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-247751    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-cdc4l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-247751             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-zghwq              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-247751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-247751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-247751 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-247751 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-247751 event: Registered Node no-preload-247751 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-247751 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-247751 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-247751 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-247751 event: Registered Node no-preload-247751 in Controller
	
	
	==> dmesg <==
	[Feb29 02:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052491] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042707] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.519441] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.403110] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.710311] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.595404] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.056156] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059678] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.214871] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.138888] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.258363] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[ +21.994355] kauditd_printk_skb: 130 callbacks suppressed
	[Feb29 02:31] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +5.739601] kauditd_printk_skb: 63 callbacks suppressed
	[  +5.730898] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.047717] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] <==
	{"level":"info","ts":"2024-02-29T02:31:03.755428Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d80e54998a205cf3","initial-advertise-peer-urls":["https://192.168.72.114:2380"],"listen-peer-urls":["https://192.168.72.114:2380"],"advertise-client-urls":["https://192.168.72.114:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.114:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T02:31:03.75555Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T02:31:04.778834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:04.77892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:04.779038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 received MsgPreVoteResp from d80e54998a205cf3 at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:04.779057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:04.779102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 received MsgVoteResp from d80e54998a205cf3 at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:04.77912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d80e54998a205cf3 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:04.779132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d80e54998a205cf3 elected leader d80e54998a205cf3 at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:04.784816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:31:04.784746Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d80e54998a205cf3","local-member-attributes":"{Name:no-preload-247751 ClientURLs:[https://192.168.72.114:2379]}","request-path":"/0/members/d80e54998a205cf3/attributes","cluster-id":"fe5d4cbbe2066f7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:31:04.78585Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:31:04.78649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:31:04.78651Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:31:04.78965Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.114:2379"}
	{"level":"info","ts":"2024-02-29T02:31:04.791825Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-02-29T02:31:23.213571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.252099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-247751\" ","response":"range_response_count:1 size:4441"}
	{"level":"info","ts":"2024-02-29T02:31:23.213752Z","caller":"traceutil/trace.go:171","msg":"trace[2039286566] range","detail":"{range_begin:/registry/minions/no-preload-247751; range_end:; response_count:1; response_revision:582; }","duration":"177.451224ms","start":"2024-02-29T02:31:23.036282Z","end":"2024-02-29T02:31:23.213733Z","steps":["trace[2039286566] 'range keys from in-memory index tree'  (duration: 177.069838ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T02:41:04.842178Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":825}
	{"level":"info","ts":"2024-02-29T02:41:04.845205Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":825,"took":"1.983721ms","hash":939881084}
	{"level":"info","ts":"2024-02-29T02:41:04.845283Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":939881084,"revision":825,"compact-revision":-1}
	{"level":"info","ts":"2024-02-29T02:46:04.850362Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1067}
	{"level":"info","ts":"2024-02-29T02:46:04.852638Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1067,"took":"1.257543ms","hash":3965321560}
	{"level":"info","ts":"2024-02-29T02:46:04.852793Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3965321560,"revision":1067,"compact-revision":825}
	{"level":"info","ts":"2024-02-29T02:50:44.510706Z","caller":"traceutil/trace.go:171","msg":"trace[864719239] transaction","detail":"{read_only:false; response_revision:1537; number_of_response:1; }","duration":"296.37879ms","start":"2024-02-29T02:50:44.214278Z","end":"2024-02-29T02:50:44.510657Z","steps":["trace[864719239] 'process raft request'  (duration: 296.187969ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:50:52 up 20 min,  0 users,  load average: 0.17, 0.19, 0.13
	Linux no-preload-247751 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] <==
	I0229 02:44:07.244175       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:46:06.245234       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:46:06.245389       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0229 02:46:07.246025       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:46:07.246093       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:46:07.246103       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:46:07.246187       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:46:07.246242       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:46:07.247240       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:47:07.246723       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:47:07.246864       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:47:07.246894       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:47:07.248393       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:47:07.248573       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:47:07.248606       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:49:07.247851       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:49:07.248261       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:49:07.248297       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:49:07.249127       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:49:07.249209       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:49:07.250157       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] <==
	I0229 02:45:19.412553       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:45:48.958839       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:45:49.424100       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:46:18.965847       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:46:19.432547       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:46:48.972539       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:46:49.441227       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 02:47:14.161563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="243.129µs"
	E0229 02:47:18.978435       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:47:19.448889       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 02:47:27.158469       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="191.849µs"
	E0229 02:47:48.987202       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:47:49.457217       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:48:18.993081       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:48:19.465443       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:48:48.999330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:48:49.475494       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:49:19.004788       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:49:19.485761       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:49:49.011312       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:49:49.495658       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:50:19.018066       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:50:19.505603       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:50:49.024465       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:50:49.516232       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] <==
	I0229 02:31:07.824255       1 server_others.go:72] "Using iptables proxy"
	I0229 02:31:07.849544       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.114"]
	I0229 02:31:07.928723       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 02:31:07.928768       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:31:07.928781       1 server_others.go:168] "Using iptables Proxier"
	I0229 02:31:07.932029       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:31:07.933383       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 02:31:07.933424       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:31:07.936278       1 config.go:188] "Starting service config controller"
	I0229 02:31:07.936357       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:31:07.936387       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:31:07.936394       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:31:07.936903       1 config.go:315] "Starting node config controller"
	I0229 02:31:07.937054       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:31:08.037108       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:31:08.037176       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:31:08.037282       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] <==
	I0229 02:31:04.117003       1 serving.go:380] Generated self-signed cert in-memory
	W0229 02:31:06.107036       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 02:31:06.107193       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 02:31:06.107332       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 02:31:06.107497       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 02:31:06.244356       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0229 02:31:06.244452       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:31:06.252759       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:31:06.252898       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:31:06.252988       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:31:06.255699       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:31:06.356753       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:48:02 no-preload-247751 kubelet[1288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:48:02 no-preload-247751 kubelet[1288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:48:16 no-preload-247751 kubelet[1288]: E0229 02:48:16.141817    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:48:29 no-preload-247751 kubelet[1288]: E0229 02:48:29.142000    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:48:44 no-preload-247751 kubelet[1288]: E0229 02:48:44.143908    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:48:55 no-preload-247751 kubelet[1288]: E0229 02:48:55.141999    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:49:02 no-preload-247751 kubelet[1288]: E0229 02:49:02.186368    1288 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:49:02 no-preload-247751 kubelet[1288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:49:02 no-preload-247751 kubelet[1288]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:49:02 no-preload-247751 kubelet[1288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:49:02 no-preload-247751 kubelet[1288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:49:09 no-preload-247751 kubelet[1288]: E0229 02:49:09.142508    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:49:23 no-preload-247751 kubelet[1288]: E0229 02:49:23.142804    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:49:38 no-preload-247751 kubelet[1288]: E0229 02:49:38.142541    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:49:52 no-preload-247751 kubelet[1288]: E0229 02:49:52.141414    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:50:02 no-preload-247751 kubelet[1288]: E0229 02:50:02.195143    1288 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:50:02 no-preload-247751 kubelet[1288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:50:02 no-preload-247751 kubelet[1288]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:50:02 no-preload-247751 kubelet[1288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:50:02 no-preload-247751 kubelet[1288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:50:04 no-preload-247751 kubelet[1288]: E0229 02:50:04.141737    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:50:15 no-preload-247751 kubelet[1288]: E0229 02:50:15.141892    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:50:28 no-preload-247751 kubelet[1288]: E0229 02:50:28.141719    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:50:39 no-preload-247751 kubelet[1288]: E0229 02:50:39.142238    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	Feb 29 02:50:52 no-preload-247751 kubelet[1288]: E0229 02:50:52.143176    1288 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-zghwq" podUID="97018e51-c009-4e33-964b-9e9e4798a48a"
	
	
	==> storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] <==
	I0229 02:31:38.551746       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:31:38.567731       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:31:38.567872       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 02:31:55.974876       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 02:31:55.975155       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-247751_6a4373fb-9d8c-4ec5-9faf-b5aba65567de!
	I0229 02:31:55.976503       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad204bd3-d8b1-463b-b094-3972bea49d44", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-247751_6a4373fb-9d8c-4ec5-9faf-b5aba65567de became leader
	I0229 02:31:56.075646       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-247751_6a4373fb-9d8c-4ec5-9faf-b5aba65567de!
	
	
	==> storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] <==
	I0229 02:31:07.745594       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0229 02:31:37.748608       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-247751 -n no-preload-247751
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-247751 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-zghwq
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-247751 describe pod metrics-server-57f55c9bc5-zghwq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-247751 describe pod metrics-server-57f55c9bc5-zghwq: exit status 1 (73.806658ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-zghwq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-247751 describe pod metrics-server-57f55c9bc5-zghwq: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (373.04s)
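For reference, the post-mortem's non-running-pod query above (helpers_test.go:261) is an ordinary field-selector list. A minimal client-go equivalent is sketched below; the kubeconfig handling and program shape are illustrative assumptions, not the harness's actual helper code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location (an assumed setup;
	// the harness selects a profile via --context instead).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same filter as `kubectl get po -A --field-selector=status.phase!=Running`.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}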

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (336.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-915633 -n embed-certs-915633
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-02-29 02:51:06.933197131 +0000 UTC m=+6024.626344050
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-915633 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-915633 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.775µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-915633 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
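The deadline error above comes from a bounded poll: the test repeatedly lists pods matching k8s-app=kubernetes-dashboard until one is Running or the 9m0s context expires, then inspects deploy/dashboard-metrics-scraper for the registry.k8s.io/echoserver:1.4 override. A hedged sketch of that polling pattern follows; it is not the harness's actual helper, and the poll interval, function shape, and clientset wiring are assumptions.

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForDashboardPod(ctx context.Context, cs *kubernetes.Clientset) error {
	// Poll until a pod labelled k8s-app=kubernetes-dashboard is Running, or
	// until ctx (e.g. context.WithTimeout(..., 9*time.Minute)) expires.
	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded", as above
		case <-time.After(5 * time.Second): // assumed poll interval
		}
	}
}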
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-915633 -n embed-certs-915633
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-915633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-915633 logs -n 25: (1.460761648s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo find                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo crio                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-117441                                       | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-542968 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | disable-driver-mounts-542968                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:23 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-915633            | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247751             | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071485  | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275488        | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-915633                 | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247751                  | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071485       | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:40 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275488             | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:50 UTC | 29 Feb 24 02:50 UTC |
	| start   | -p newest-cni-052502 --memory=2200 --alsologtostderr   | newest-cni-052502            | jenkins | v1.32.0 | 29 Feb 24 02:50 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:50 UTC | 29 Feb 24 02:50 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
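The Audit rows above document the image-override pattern this test group exercises: each metrics-server/dashboard addon is enabled with --images/--registries pointed at registry.k8s.io/echoserver:1.4 and fake.domain, which is exactly the image string the AddonExistsAfterStop assertion later expects to find in the deployment ("Expected to contain" above). The command shape, copied from the table:

    out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-915633 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    out/minikube-linux-amd64 addons enable dashboard -p embed-certs-915633 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4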
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:50:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:50:12.727717  374821 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:50:12.727853  374821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:50:12.727862  374821 out.go:304] Setting ErrFile to fd 2...
	I0229 02:50:12.727866  374821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:50:12.728168  374821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:50:12.728877  374821 out.go:298] Setting JSON to false
	I0229 02:50:12.730123  374821 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9156,"bootTime":1709165857,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:50:12.730192  374821 start.go:139] virtualization: kvm guest
	I0229 02:50:12.732558  374821 out.go:177] * [newest-cni-052502] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:50:12.734156  374821 notify.go:220] Checking for updates...
	I0229 02:50:12.734284  374821 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:50:12.735665  374821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:50:12.736995  374821 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:50:12.738137  374821 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:50:12.739318  374821 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:50:12.740496  374821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:50:12.742025  374821 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:50:12.742116  374821 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:50:12.742205  374821 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:50:12.742379  374821 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:50:12.780367  374821 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 02:50:12.781749  374821 start.go:299] selected driver: kvm2
	I0229 02:50:12.781767  374821 start.go:903] validating driver "kvm2" against <nil>
	I0229 02:50:12.781779  374821 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:50:12.782707  374821 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:50:12.782790  374821 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:50:12.798844  374821 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:50:12.798890  374821 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W0229 02:50:12.798932  374821 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0229 02:50:12.799164  374821 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 02:50:12.799237  374821 cni.go:84] Creating CNI manager for ""
	I0229 02:50:12.799250  374821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:50:12.799260  374821 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
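As the warning above notes, --cni is the user-friendly alternative to --network-plugin=cni; a hedged sketch of an equivalent start invocation (the flag name is taken from the warning itself, the value from the bridge recommendation two lines up):

    out/minikube-linux-amd64 start -p newest-cni-052502 --driver=kvm2 --container-runtime=crio --cni=bridge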
	I0229 02:50:12.799269  374821 start_flags.go:323] config:
	{Name:newest-cni-052502 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:50:12.799397  374821 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:50:12.801179  374821 out.go:177] * Starting control plane node newest-cni-052502 in cluster newest-cni-052502
	I0229 02:50:12.802306  374821 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:50:12.802357  374821 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0229 02:50:12.802370  374821 cache.go:56] Caching tarball of preloaded images
	I0229 02:50:12.802472  374821 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:50:12.802487  374821 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
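A quick way to confirm the cache hit reported above is to stat the tarball at the path the log prints; a minimal sketch:

    ls -lh /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4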
	I0229 02:50:12.802621  374821 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/config.json ...
	I0229 02:50:12.802652  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/config.json: {Name:mk79971e208d4ada52b1d140a2faac7d49ee77fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:12.802837  374821 start.go:365] acquiring machines lock for newest-cni-052502: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:50:12.802885  374821 start.go:369] acquired machines lock for "newest-cni-052502" in 26.531µs
	I0229 02:50:12.802909  374821 start.go:93] Provisioning new machine with config: &{Name:newest-cni-052502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
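The provisioning config above is the same structure persisted to the profile's config.json a few lines earlier; a sketch for inspecting it on the CI host, assuming python3 is available for pretty-printing:

    python3 -m json.tool /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/config.json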
	I0229 02:50:12.802977  374821 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 02:50:12.804437  374821 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 02:50:12.804600  374821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:50:12.804646  374821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:50:12.819091  374821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
	I0229 02:50:12.819564  374821 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:50:12.820137  374821 main.go:141] libmachine: Using API Version  1
	I0229 02:50:12.820159  374821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:50:12.820539  374821 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:50:12.820756  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:50:12.820911  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:12.821094  374821 start.go:159] libmachine.API.Create for "newest-cni-052502" (driver="kvm2")
	I0229 02:50:12.821138  374821 client.go:168] LocalClient.Create starting
	I0229 02:50:12.821196  374821 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem
	I0229 02:50:12.821236  374821 main.go:141] libmachine: Decoding PEM data...
	I0229 02:50:12.821253  374821 main.go:141] libmachine: Parsing certificate...
	I0229 02:50:12.821307  374821 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem
	I0229 02:50:12.821326  374821 main.go:141] libmachine: Decoding PEM data...
	I0229 02:50:12.821336  374821 main.go:141] libmachine: Parsing certificate...
	I0229 02:50:12.821352  374821 main.go:141] libmachine: Running pre-create checks...
	I0229 02:50:12.821359  374821 main.go:141] libmachine: (newest-cni-052502) Calling .PreCreateCheck
	I0229 02:50:12.821754  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetConfigRaw
	I0229 02:50:12.822307  374821 main.go:141] libmachine: Creating machine...
	I0229 02:50:12.822324  374821 main.go:141] libmachine: (newest-cni-052502) Calling .Create
	I0229 02:50:12.822496  374821 main.go:141] libmachine: (newest-cni-052502) Creating KVM machine...
	I0229 02:50:12.823758  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found existing default KVM network
	I0229 02:50:12.825880  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:12.825699  374844 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0e0}
	I0229 02:50:12.831015  374821 main.go:141] libmachine: (newest-cni-052502) DBG | trying to create private KVM network mk-newest-cni-052502 192.168.39.0/24...
	I0229 02:50:12.906201  374821 main.go:141] libmachine: (newest-cni-052502) DBG | private KVM network mk-newest-cni-052502 192.168.39.0/24 created
	I0229 02:50:12.906269  374821 main.go:141] libmachine: (newest-cni-052502) Setting up store path in /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502 ...
	I0229 02:50:12.906289  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:12.906170  374844 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:50:12.906308  374821 main.go:141] libmachine: (newest-cni-052502) Building disk image from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 02:50:12.906490  374821 main.go:141] libmachine: (newest-cni-052502) Downloading /home/jenkins/minikube-integration/18063-316644/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 02:50:13.169595  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:13.169451  374844 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa...
	I0229 02:50:13.316334  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:13.316217  374844 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/newest-cni-052502.rawdisk...
	I0229 02:50:13.316385  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Writing magic tar header
	I0229 02:50:13.316407  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Writing SSH key tar header
	I0229 02:50:13.316509  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:13.316428  374844 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502 ...
	I0229 02:50:13.316567  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502
	I0229 02:50:13.316595  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502 (perms=drwx------)
	I0229 02:50:13.316606  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube/machines
	I0229 02:50:13.316621  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:50:13.316630  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-316644
	I0229 02:50:13.316640  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube/machines (perms=drwxr-xr-x)
	I0229 02:50:13.316655  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644/.minikube (perms=drwxr-xr-x)
	I0229 02:50:13.316668  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins/minikube-integration/18063-316644 (perms=drwxrwxr-x)
	I0229 02:50:13.316694  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 02:50:13.316705  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 02:50:13.316713  374821 main.go:141] libmachine: (newest-cni-052502) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 02:50:13.316725  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home/jenkins
	I0229 02:50:13.316735  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Checking permissions on dir: /home
	I0229 02:50:13.316744  374821 main.go:141] libmachine: (newest-cni-052502) Creating domain...
	I0229 02:50:13.316757  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Skipping /home - not owner
	I0229 02:50:13.318576  374821 main.go:141] libmachine: (newest-cni-052502) define libvirt domain using xml: 
	I0229 02:50:13.318599  374821 main.go:141] libmachine: (newest-cni-052502) <domain type='kvm'>
	I0229 02:50:13.318615  374821 main.go:141] libmachine: (newest-cni-052502)   <name>newest-cni-052502</name>
	I0229 02:50:13.318623  374821 main.go:141] libmachine: (newest-cni-052502)   <memory unit='MiB'>2200</memory>
	I0229 02:50:13.318632  374821 main.go:141] libmachine: (newest-cni-052502)   <vcpu>2</vcpu>
	I0229 02:50:13.318640  374821 main.go:141] libmachine: (newest-cni-052502)   <features>
	I0229 02:50:13.318647  374821 main.go:141] libmachine: (newest-cni-052502)     <acpi/>
	I0229 02:50:13.318654  374821 main.go:141] libmachine: (newest-cni-052502)     <apic/>
	I0229 02:50:13.318674  374821 main.go:141] libmachine: (newest-cni-052502)     <pae/>
	I0229 02:50:13.318687  374821 main.go:141] libmachine: (newest-cni-052502)     
	I0229 02:50:13.318694  374821 main.go:141] libmachine: (newest-cni-052502)   </features>
	I0229 02:50:13.318703  374821 main.go:141] libmachine: (newest-cni-052502)   <cpu mode='host-passthrough'>
	I0229 02:50:13.318711  374821 main.go:141] libmachine: (newest-cni-052502)   
	I0229 02:50:13.318717  374821 main.go:141] libmachine: (newest-cni-052502)   </cpu>
	I0229 02:50:13.318724  374821 main.go:141] libmachine: (newest-cni-052502)   <os>
	I0229 02:50:13.318730  374821 main.go:141] libmachine: (newest-cni-052502)     <type>hvm</type>
	I0229 02:50:13.318739  374821 main.go:141] libmachine: (newest-cni-052502)     <boot dev='cdrom'/>
	I0229 02:50:13.318746  374821 main.go:141] libmachine: (newest-cni-052502)     <boot dev='hd'/>
	I0229 02:50:13.318755  374821 main.go:141] libmachine: (newest-cni-052502)     <bootmenu enable='no'/>
	I0229 02:50:13.318761  374821 main.go:141] libmachine: (newest-cni-052502)   </os>
	I0229 02:50:13.318771  374821 main.go:141] libmachine: (newest-cni-052502)   <devices>
	I0229 02:50:13.318779  374821 main.go:141] libmachine: (newest-cni-052502)     <disk type='file' device='cdrom'>
	I0229 02:50:13.318800  374821 main.go:141] libmachine: (newest-cni-052502)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/boot2docker.iso'/>
	I0229 02:50:13.318808  374821 main.go:141] libmachine: (newest-cni-052502)       <target dev='hdc' bus='scsi'/>
	I0229 02:50:13.318816  374821 main.go:141] libmachine: (newest-cni-052502)       <readonly/>
	I0229 02:50:13.318823  374821 main.go:141] libmachine: (newest-cni-052502)     </disk>
	I0229 02:50:13.318831  374821 main.go:141] libmachine: (newest-cni-052502)     <disk type='file' device='disk'>
	I0229 02:50:13.318839  374821 main.go:141] libmachine: (newest-cni-052502)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 02:50:13.318852  374821 main.go:141] libmachine: (newest-cni-052502)       <source file='/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/newest-cni-052502.rawdisk'/>
	I0229 02:50:13.318860  374821 main.go:141] libmachine: (newest-cni-052502)       <target dev='hda' bus='virtio'/>
	I0229 02:50:13.318880  374821 main.go:141] libmachine: (newest-cni-052502)     </disk>
	I0229 02:50:13.318888  374821 main.go:141] libmachine: (newest-cni-052502)     <interface type='network'>
	I0229 02:50:13.318897  374821 main.go:141] libmachine: (newest-cni-052502)       <source network='mk-newest-cni-052502'/>
	I0229 02:50:13.318904  374821 main.go:141] libmachine: (newest-cni-052502)       <model type='virtio'/>
	I0229 02:50:13.318912  374821 main.go:141] libmachine: (newest-cni-052502)     </interface>
	I0229 02:50:13.318919  374821 main.go:141] libmachine: (newest-cni-052502)     <interface type='network'>
	I0229 02:50:13.318929  374821 main.go:141] libmachine: (newest-cni-052502)       <source network='default'/>
	I0229 02:50:13.318936  374821 main.go:141] libmachine: (newest-cni-052502)       <model type='virtio'/>
	I0229 02:50:13.318944  374821 main.go:141] libmachine: (newest-cni-052502)     </interface>
	I0229 02:50:13.318951  374821 main.go:141] libmachine: (newest-cni-052502)     <serial type='pty'>
	I0229 02:50:13.318961  374821 main.go:141] libmachine: (newest-cni-052502)       <target port='0'/>
	I0229 02:50:13.318968  374821 main.go:141] libmachine: (newest-cni-052502)     </serial>
	I0229 02:50:13.318977  374821 main.go:141] libmachine: (newest-cni-052502)     <console type='pty'>
	I0229 02:50:13.318985  374821 main.go:141] libmachine: (newest-cni-052502)       <target type='serial' port='0'/>
	I0229 02:50:13.318993  374821 main.go:141] libmachine: (newest-cni-052502)     </console>
	I0229 02:50:13.318999  374821 main.go:141] libmachine: (newest-cni-052502)     <rng model='virtio'>
	I0229 02:50:13.319009  374821 main.go:141] libmachine: (newest-cni-052502)       <backend model='random'>/dev/random</backend>
	I0229 02:50:13.319015  374821 main.go:141] libmachine: (newest-cni-052502)     </rng>
	I0229 02:50:13.319026  374821 main.go:141] libmachine: (newest-cni-052502)     
	I0229 02:50:13.319032  374821 main.go:141] libmachine: (newest-cni-052502)     
	I0229 02:50:13.319040  374821 main.go:141] libmachine: (newest-cni-052502)   </devices>
	I0229 02:50:13.319046  374821 main.go:141] libmachine: (newest-cni-052502) </domain>
	I0229 02:50:13.319058  374821 main.go:141] libmachine: (newest-cni-052502) 
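Once the domain XML above has been defined, stock libvirt tooling can confirm what the driver created; a hedged sketch using the domain and network names from this log (virsh is not invoked by the test itself):

    virsh -c qemu:///system dumpxml newest-cni-052502
    virsh -c qemu:///system net-info mk-newest-cni-052502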
	I0229 02:50:13.324078  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:18:95:ba in network default
	I0229 02:50:13.324819  374821 main.go:141] libmachine: (newest-cni-052502) Ensuring networks are active...
	I0229 02:50:13.324849  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:13.325621  374821 main.go:141] libmachine: (newest-cni-052502) Ensuring network default is active
	I0229 02:50:13.325979  374821 main.go:141] libmachine: (newest-cni-052502) Ensuring network mk-newest-cni-052502 is active
	I0229 02:50:13.326496  374821 main.go:141] libmachine: (newest-cni-052502) Getting domain xml...
	I0229 02:50:13.327307  374821 main.go:141] libmachine: (newest-cni-052502) Creating domain...
	I0229 02:50:14.598130  374821 main.go:141] libmachine: (newest-cni-052502) Waiting to get IP...
	I0229 02:50:14.598866  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:14.599421  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:14.599488  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:14.599435  374844 retry.go:31] will retry after 215.994494ms: waiting for machine to come up
	I0229 02:50:14.817032  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:14.817566  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:14.817596  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:14.817521  374844 retry.go:31] will retry after 376.066204ms: waiting for machine to come up
	I0229 02:50:15.195070  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:15.195620  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:15.195685  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:15.195571  374844 retry.go:31] will retry after 368.532388ms: waiting for machine to come up
	I0229 02:50:15.566245  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:15.566737  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:15.566760  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:15.566696  374844 retry.go:31] will retry after 443.886219ms: waiting for machine to come up
	I0229 02:50:16.012395  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:16.012863  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:16.012892  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:16.012805  374844 retry.go:31] will retry after 690.20974ms: waiting for machine to come up
	I0229 02:50:16.704458  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:16.704840  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:16.704869  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:16.704800  374844 retry.go:31] will retry after 678.534797ms: waiting for machine to come up
	I0229 02:50:17.384591  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:17.385072  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:17.385111  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:17.385072  374844 retry.go:31] will retry after 1.034211028s: waiting for machine to come up
	I0229 02:50:18.420604  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:18.421111  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:18.421142  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:18.421049  374844 retry.go:31] will retry after 1.07674173s: waiting for machine to come up
	I0229 02:50:19.499142  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:19.499549  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:19.499572  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:19.499515  374844 retry.go:31] will retry after 1.407577159s: waiting for machine to come up
	I0229 02:50:20.908904  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:20.909346  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:20.909371  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:20.909293  374844 retry.go:31] will retry after 1.560987942s: waiting for machine to come up
	I0229 02:50:22.471531  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:22.472048  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:22.472077  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:22.472028  374844 retry.go:31] will retry after 2.683754954s: waiting for machine to come up
	I0229 02:50:25.158729  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:25.159283  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:25.159316  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:25.159203  374844 retry.go:31] will retry after 3.064755607s: waiting for machine to come up
	I0229 02:50:28.226168  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:28.226598  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:28.226627  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:28.226556  374844 retry.go:31] will retry after 2.893942808s: waiting for machine to come up
	I0229 02:50:31.123258  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:31.123692  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:50:31.123717  374821 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:50:31.123647  374844 retry.go:31] will retry after 3.539127651s: waiting for machine to come up
	I0229 02:50:34.664083  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.664596  374821 main.go:141] libmachine: (newest-cni-052502) Found IP for machine: 192.168.39.3
	I0229 02:50:34.664624  374821 main.go:141] libmachine: (newest-cni-052502) Reserving static IP address...
	I0229 02:50:34.664639  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has current primary IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.665091  374821 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find host DHCP lease matching {name: "newest-cni-052502", mac: "52:54:00:19:fc:ef", ip: "192.168.39.3"} in network mk-newest-cni-052502
	I0229 02:50:34.743587  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Getting to WaitForSSH function...
	I0229 02:50:34.743625  374821 main.go:141] libmachine: (newest-cni-052502) Reserved static IP address: 192.168.39.3
	I0229 02:50:34.743671  374821 main.go:141] libmachine: (newest-cni-052502) Waiting for SSH to be available...
	I0229 02:50:34.746635  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.747103  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:34.747134  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.747267  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Using SSH client type: external
	I0229 02:50:34.747284  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa (-rw-------)
	I0229 02:50:34.747370  374821 main.go:141] libmachine: (newest-cni-052502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:50:34.747406  374821 main.go:141] libmachine: (newest-cni-052502) DBG | About to run SSH command:
	I0229 02:50:34.747425  374821 main.go:141] libmachine: (newest-cni-052502) DBG | exit 0
	I0229 02:50:34.874573  374821 main.go:141] libmachine: (newest-cni-052502) DBG | SSH cmd err, output: <nil>: 
	I0229 02:50:34.874787  374821 main.go:141] libmachine: (newest-cni-052502) KVM machine creation complete!
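The IP-discovery loop above polls libvirt's DHCP leases with increasing backoff until the guest's MAC acquires an address; the equivalent manual probe, plus the same external SSH check the driver ran (key path and options copied from the log), would look roughly like:

    virsh -c qemu:///system net-dhcp-leases mk-newest-cni-052502
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa \
      docker@192.168.39.3 'exit 0'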
	I0229 02:50:34.875135  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetConfigRaw
	I0229 02:50:34.875786  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:34.875994  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:34.876202  374821 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 02:50:34.876231  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetState
	I0229 02:50:34.877701  374821 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 02:50:34.877715  374821 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 02:50:34.877721  374821 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 02:50:34.877727  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:34.880516  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.880934  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:34.880964  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.881078  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:34.881275  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:34.881461  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:34.881638  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:34.881830  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:34.882085  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:34.882103  374821 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 02:50:34.990136  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:50:34.990162  374821 main.go:141] libmachine: Detecting the provisioner...
	I0229 02:50:34.990170  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:34.993087  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.993469  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:34.993499  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:34.993676  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:34.993908  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:34.994125  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:34.994324  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:34.994533  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:34.994716  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:34.994728  374821 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 02:50:35.103711  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 02:50:35.103786  374821 main.go:141] libmachine: found compatible host: buildroot
	I0229 02:50:35.103796  374821 main.go:141] libmachine: Provisioning with buildroot...
	I0229 02:50:35.103807  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:50:35.104080  374821 buildroot.go:166] provisioning hostname "newest-cni-052502"
	I0229 02:50:35.104112  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:50:35.104308  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.106944  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.107339  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.107387  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.107547  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:35.107753  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.107933  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.108092  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:35.108276  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:35.108478  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:35.108491  374821 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-052502 && echo "newest-cni-052502" | sudo tee /etc/hostname
	I0229 02:50:35.231050  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052502
	
	I0229 02:50:35.231078  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.233979  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.234364  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.234408  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.234578  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:35.234761  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.234958  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.235084  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:35.235218  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:35.235457  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:35.235485  374821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-052502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-052502/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-052502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:50:35.354365  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:50:35.354450  374821 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:50:35.354484  374821 buildroot.go:174] setting up certificates
	I0229 02:50:35.354499  374821 provision.go:83] configureAuth start
	I0229 02:50:35.354516  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:50:35.354826  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:50:35.357855  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.358305  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.358332  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.358504  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.361003  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.361391  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.361434  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.361571  374821 provision.go:138] copyHostCerts
	I0229 02:50:35.361636  374821 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:50:35.361667  374821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:50:35.361778  374821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:50:35.361876  374821 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:50:35.361885  374821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:50:35.361915  374821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:50:35.361965  374821 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:50:35.361972  374821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:50:35.361992  374821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:50:35.362032  374821 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.newest-cni-052502 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube newest-cni-052502]
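
The SAN list above is what lets a single server certificate answer for the VM by IP (192.168.39.3), by loopback, and by name. A minimal sketch of producing an equivalent SAN-bearing certificate with plain openssl (self-signed here for brevity, whereas minikube signs with its own CA key; -addext assumes OpenSSL 1.1.1 or newer, and -days 1095 mirrors the profile's 26280h cert expiration):

    openssl req -x509 -newkey rsa:2048 -nodes -days 1095 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.newest-cni-052502" \
      -addext "subjectAltName=IP:192.168.39.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:newest-cni-052502"
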
	I0229 02:50:35.448234  374821 provision.go:172] copyRemoteCerts
	I0229 02:50:35.448294  374821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:50:35.448320  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.451286  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.451606  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.451634  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.451848  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:35.452011  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.452183  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:35.452285  374821 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:50:35.535100  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:50:35.562514  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:50:35.589505  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:50:35.617178  374821 provision.go:86] duration metric: configureAuth took 262.644629ms
	I0229 02:50:35.617208  374821 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:50:35.617427  374821 config.go:182] Loaded profile config "newest-cni-052502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:50:35.617557  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.620493  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.620888  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.620918  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.621073  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:35.621298  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.621492  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.621644  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:35.621847  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:35.622006  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:35.622019  374821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:50:35.921401  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
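
The block just written to /etc/sysconfig/crio.minikube presumably reaches the daemon through an EnvironmentFile reference in the ISO's crio.service unit (an assumption from the sysconfig convention, not something this log shows); the immediate systemctl restart crio is what makes the --insecure-registry flag for the 10.96.0.0/12 service CIDR take effect.
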
	
	I0229 02:50:35.921430  374821 main.go:141] libmachine: Checking connection to Docker...
	I0229 02:50:35.921441  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetURL
	I0229 02:50:35.922795  374821 main.go:141] libmachine: (newest-cni-052502) DBG | Using libvirt version 6000000
	I0229 02:50:35.925270  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.925740  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.925771  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.925975  374821 main.go:141] libmachine: Docker is up and running!
	I0229 02:50:35.925989  374821 main.go:141] libmachine: Reticulating splines...
	I0229 02:50:35.926003  374821 client.go:171] LocalClient.Create took 23.10484651s
	I0229 02:50:35.926025  374821 start.go:167] duration metric: libmachine.API.Create for "newest-cni-052502" took 23.104933145s
	I0229 02:50:35.926037  374821 start.go:300] post-start starting for "newest-cni-052502" (driver="kvm2")
	I0229 02:50:35.926053  374821 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:50:35.926073  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:35.926346  374821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:50:35.926373  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:35.928805  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.929093  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:35.929131  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:35.929265  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:35.929446  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:35.929620  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:35.929746  374821 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:50:36.015843  374821 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:50:36.021514  374821 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:50:36.021544  374821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:50:36.021626  374821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:50:36.021721  374821 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:50:36.021845  374821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:50:36.033720  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:50:36.060682  374821 start.go:303] post-start completed in 134.629361ms
	I0229 02:50:36.060745  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetConfigRaw
	I0229 02:50:36.061481  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:50:36.064372  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.064760  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:36.064786  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.064994  374821 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/config.json ...
	I0229 02:50:36.065225  374821 start.go:128] duration metric: createHost completed in 23.262235057s
	I0229 02:50:36.065254  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:36.067542  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.067865  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:36.067895  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.068030  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:36.068242  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:36.068443  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:36.068610  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:36.068812  374821 main.go:141] libmachine: Using SSH client type: native
	I0229 02:50:36.068970  374821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:50:36.068983  374821 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:50:36.172256  374821 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709175036.151249194
	
	I0229 02:50:36.172282  374821 fix.go:206] guest clock: 1709175036.151249194
	I0229 02:50:36.172292  374821 fix.go:219] Guest: 2024-02-29 02:50:36.151249194 +0000 UTC Remote: 2024-02-29 02:50:36.065240506 +0000 UTC m=+23.389219830 (delta=86.008688ms)
	I0229 02:50:36.172329  374821 fix.go:190] guest clock delta is within tolerance: 86.008688ms
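
The fix.go lines compare the guest's `date +%s.%N` reading against the host wall clock at the moment the command returns. A hedged sketch of the same check run by hand from the host (the SSH key path and user come from the sshutil lines above; the ±2 s bound is illustrative, not minikube's exact constant):

    guest=$(ssh -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa docker@192.168.39.3 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; print "delta:", d, "s"; exit (d < -2 || d > 2) }'
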
	I0229 02:50:36.172337  374821 start.go:83] releasing machines lock for "newest-cni-052502", held for 23.369440418s
	I0229 02:50:36.172375  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:36.172697  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:50:36.175448  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.175853  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:36.175893  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.176101  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:36.176653  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:36.176845  374821 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:50:36.176946  374821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:50:36.176999  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:36.177075  374821 ssh_runner.go:195] Run: cat /version.json
	I0229 02:50:36.177101  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:50:36.179667  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.179899  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.180055  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:36.180076  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.180321  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:36.180331  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:36.180348  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:36.180502  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:36.180582  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:50:36.180689  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:36.180701  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:50:36.180790  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:50:36.180854  374821 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:50:36.180980  374821 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:50:36.259768  374821 ssh_runner.go:195] Run: systemctl --version
	I0229 02:50:36.286936  374821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:50:36.458813  374821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:50:36.468277  374821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:50:36.468461  374821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:50:36.488655  374821 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:50:36.488677  374821 start.go:475] detecting cgroup driver to use...
	I0229 02:50:36.488733  374821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:50:36.508407  374821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:50:36.523751  374821 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:50:36.523802  374821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:50:36.538616  374821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:50:36.553605  374821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:50:36.680629  374821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:50:36.839915  374821 docker.go:233] disabling docker service ...
	I0229 02:50:36.840012  374821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:50:36.857071  374821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:50:36.873581  374821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:50:37.029715  374821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:50:37.165022  374821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:50:37.182206  374821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:50:37.204760  374821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:50:37.204818  374821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:50:37.216139  374821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:50:37.216196  374821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:50:37.227926  374821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:50:37.239374  374821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
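
Read together, the four sed edits leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below. This is a hedged reconstruction from the commands above, not a capture from the VM; the section headers are assumptions, since sed only rewrites whatever lines the ISO already ships:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
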
	I0229 02:50:37.251162  374821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:50:37.265218  374821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:50:37.276719  374821 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:50:37.276777  374821 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:50:37.292624  374821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:50:37.304108  374821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:50:37.454946  374821 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:50:37.628940  374821 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:50:37.629029  374821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:50:37.635522  374821 start.go:543] Will wait 60s for crictl version
	I0229 02:50:37.635581  374821 ssh_runner.go:195] Run: which crictl
	I0229 02:50:37.639943  374821 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:50:37.686192  374821 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:50:37.686305  374821 ssh_runner.go:195] Run: crio --version
	I0229 02:50:37.718681  374821 ssh_runner.go:195] Run: crio --version
	I0229 02:50:37.750993  374821 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 02:50:37.752560  374821 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:50:37.755351  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:37.755728  374821 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:50:28 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:50:37.755759  374821 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:50:37.755981  374821 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:50:37.760789  374821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
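
The bash one-liner above is the idempotent form of the edit: it filters any stale host.minikube.internal line out of /etc/hosts, appends the fresh 192.168.39.1 mapping, and sudo-copies the temp file back over /etc/hosts in a single step.
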
	I0229 02:50:37.776559  374821 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 02:50:37.777821  374821 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:50:37.777885  374821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:50:37.824282  374821 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 02:50:37.824391  374821 ssh_runner.go:195] Run: which lz4
	I0229 02:50:37.828981  374821 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:50:37.833913  374821 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:50:37.833942  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0229 02:50:39.508553  374821 crio.go:444] Took 1.679606 seconds to copy over tarball
	I0229 02:50:39.508632  374821 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:50:42.103341  374821 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.594680261s)
	I0229 02:50:42.103378  374821 crio.go:451] Took 2.594798 seconds to extract the tarball
	I0229 02:50:42.103400  374821 ssh_runner.go:146] rm: /preloaded.tar.lz4
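
For scale: the preload tarball is 401,853,962 bytes (~383 MiB), so the 1.68-second copy works out to roughly 230 MiB/s across the host-to-VM link, with the lz4 extraction adding another 2.59 seconds.
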
	I0229 02:50:42.143748  374821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:50:42.195268  374821 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:50:42.195294  374821 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:50:42.195385  374821 ssh_runner.go:195] Run: crio config
	I0229 02:50:42.244619  374821 cni.go:84] Creating CNI manager for ""
	I0229 02:50:42.244647  374821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:50:42.244680  374821 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 02:50:42.244705  374821 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-052502 NodeName:newest-cni-052502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:50:42.244865  374821 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-052502"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:50:42.244969  374821 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-052502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
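
Two notes on the rendered unit and config: the bare ExecStart= line is deliberate, since in a systemd drop-in an empty assignment clears the unit's inherited ExecStart before the replacement command line is set; and once the kubeadm.yaml below has been copied into place, it can be sanity-checked with the staged binaries (kubeadm config validate has shipped since v1.26, so it should exist for v1.29.0-rc.2):

    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
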
	I0229 02:50:42.245029  374821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 02:50:42.257131  374821 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:50:42.257228  374821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:50:42.268835  374821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (417 bytes)
	I0229 02:50:42.287405  374821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 02:50:42.306094  374821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I0229 02:50:42.324756  374821 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0229 02:50:42.329291  374821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:50:42.343936  374821 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502 for IP: 192.168.39.3
	I0229 02:50:42.343971  374821 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:42.344140  374821 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:50:42.344185  374821 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:50:42.344228  374821 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.key
	I0229 02:50:42.344242  374821 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.crt with IP's: []
	I0229 02:50:42.601465  374821 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.crt ...
	I0229 02:50:42.601496  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.crt: {Name:mkb10a9b350cb6b477d3d1773e938be9b48f7e3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:42.601699  374821 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.key ...
	I0229 02:50:42.601718  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.key: {Name:mk3e2f519fc0812b0283e9892363152739bfbc85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:42.601838  374821 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key.599d509e
	I0229 02:50:42.601861  374821 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt.599d509e with IP's: [192.168.39.3 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 02:50:42.823809  374821 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt.599d509e ...
	I0229 02:50:42.823844  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt.599d509e: {Name:mkd5e9be92674d8ea2ea382e6a4a491444219f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:42.824043  374821 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key.599d509e ...
	I0229 02:50:42.824065  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key.599d509e: {Name:mkcca81814e862bc1f0c2c52937f10fd2433a80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:42.824169  374821 certs.go:337] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt.599d509e -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt
	I0229 02:50:42.824271  374821 certs.go:341] copying /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key.599d509e -> /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key
	I0229 02:50:42.824338  374821 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.key
	I0229 02:50:42.824354  374821 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.crt with IP's: []
	I0229 02:50:43.135550  374821 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.crt ...
	I0229 02:50:43.135592  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.crt: {Name:mk624109b6a026719ae142b1084c83c24f8b99cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:43.135793  374821 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.key ...
	I0229 02:50:43.135813  374821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.key: {Name:mkfe74020102b9bd2320bc39d9b1f38c0d6d358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:50:43.136024  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:50:43.136081  374821 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:50:43.136096  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:50:43.136142  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:50:43.136176  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:50:43.136207  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:50:43.136274  374821 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:50:43.137156  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:50:43.169150  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:50:43.198636  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:50:43.227778  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:50:43.257237  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:50:43.284632  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:50:43.315651  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:50:43.344242  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:50:43.374619  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:50:43.404734  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:50:43.433574  374821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:50:43.463696  374821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:50:43.483424  374821 ssh_runner.go:195] Run: openssl version
	I0229 02:50:43.490700  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:50:43.505018  374821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:50:43.510881  374821 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:50:43.510960  374821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:50:43.517601  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:50:43.531800  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:50:43.545585  374821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:50:43.551076  374821 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:50:43.551142  374821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:50:43.557446  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:50:43.571494  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:50:43.586705  374821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:50:43.592762  374821 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:50:43.592832  374821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:50:43.599715  374821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
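
The ls/openssl/ln sequence above exists because OpenSSL resolves trust by subject-hash symlinks: c_rehash-style names like b5213941.0 are just `openssl x509 -hash` output with a .0 suffix. For example (the output line is inferred from the link name the log chose two commands earlier, not re-captured):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941

which is why minikubeCA.pem was linked as /etc/ssl/certs/b5213941.0.
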
	I0229 02:50:43.614375  374821 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:50:43.619550  374821 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 02:50:43.619617  374821 kubeadm.go:404] StartCluster: {Name:newest-cni-052502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:50:43.619741  374821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:50:43.619802  374821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:50:43.671528  374821 cri.go:89] found id: ""
	I0229 02:50:43.671600  374821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:50:43.683402  374821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:50:43.695131  374821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:50:43.706837  374821 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:50:43.706886  374821 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
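
The long --ignore-preflight-errors list exists so kubeadm init can run over state minikube has already staged (the directories under /etc/kubernetes and /var/lib/minikube, including the etcd data dir, plus any leftover static-pod manifests) and so the Swap, NumCPU, Mem, and Port-10250 host checks never block the 2200 MB / 2-CPU test VM.
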
	I0229 02:50:43.821968  374821 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0229 02:50:43.822208  374821 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:50:44.074143  374821 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:50:44.074267  374821 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:50:44.074392  374821 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 02:50:44.340767  374821 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:50:44.433518  374821 out.go:204]   - Generating certificates and keys ...
	I0229 02:50:44.433653  374821 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:50:44.433752  374821 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:50:44.675166  374821 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 02:50:45.102325  374821 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 02:50:45.204610  374821 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 02:50:45.404744  374821 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 02:50:45.581501  374821 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 02:50:45.581878  374821 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-052502] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0229 02:50:45.903152  374821 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 02:50:45.903351  374821 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-052502] and IPs [192.168.39.3 127.0.0.1 ::1]
	I0229 02:50:46.048573  374821 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 02:50:46.125364  374821 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 02:50:46.271627  374821 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 02:50:46.271799  374821 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:50:46.393572  374821 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:50:46.671199  374821 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0229 02:50:46.843556  374821 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:50:47.035187  374821 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:50:47.188068  374821 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:50:47.188632  374821 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:50:47.193027  374821 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:50:47.194675  374821 out.go:204]   - Booting up control plane ...
	I0229 02:50:47.194778  374821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:50:47.195317  374821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:50:47.196229  374821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:50:47.220819  374821 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:50:47.221809  374821 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:50:47.222004  374821 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:50:47.369164  374821 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:50:53.874119  374821 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.505523 seconds
	I0229 02:50:53.887577  374821 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:50:53.908007  374821 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:50:54.436179  374821 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:50:54.436359  374821 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-052502 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:50:54.951291  374821 kubeadm.go:322] [bootstrap-token] Using token: 3uc7k0.6v7bmv9oqlgvi9fu
	I0229 02:50:54.952953  374821 out.go:204]   - Configuring RBAC rules ...
	I0229 02:50:54.953110  374821 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:50:54.958013  374821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:50:54.965240  374821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:50:54.968330  374821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:50:54.972021  374821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:50:54.978285  374821 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:50:54.989056  374821 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:50:55.293061  374821 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:50:55.364115  374821 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:50:55.365120  374821 kubeadm.go:322] 
	I0229 02:50:55.365208  374821 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:50:55.365231  374821 kubeadm.go:322] 
	I0229 02:50:55.365319  374821 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:50:55.365332  374821 kubeadm.go:322] 
	I0229 02:50:55.365377  374821 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:50:55.365477  374821 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:50:55.365563  374821 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:50:55.365574  374821 kubeadm.go:322] 
	I0229 02:50:55.365650  374821 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:50:55.365662  374821 kubeadm.go:322] 
	I0229 02:50:55.365749  374821 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:50:55.365760  374821 kubeadm.go:322] 
	I0229 02:50:55.365826  374821 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:50:55.365932  374821 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:50:55.365993  374821 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:50:55.366018  374821 kubeadm.go:322] 
	I0229 02:50:55.366144  374821 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:50:55.366257  374821 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:50:55.366269  374821 kubeadm.go:322] 
	I0229 02:50:55.366370  374821 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3uc7k0.6v7bmv9oqlgvi9fu \
	I0229 02:50:55.366496  374821 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 \
	I0229 02:50:55.366524  374821 kubeadm.go:322] 	--control-plane 
	I0229 02:50:55.366533  374821 kubeadm.go:322] 
	I0229 02:50:55.366653  374821 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:50:55.366670  374821 kubeadm.go:322] 
	I0229 02:50:55.366763  374821 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3uc7k0.6v7bmv9oqlgvi9fu \
	I0229 02:50:55.366922  374821 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
	I0229 02:50:55.374276  374821 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
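	The warning above is kubeadm's own suggestion and amounts to a single systemctl call inside the VM (in minikube environments the warning is typically benign, since minikube writes and starts the kubelet unit itself):
	
	    sudo systemctl enable kubelet.service
	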
	I0229 02:50:55.374305  374821 cni.go:84] Creating CNI manager for ""
	I0229 02:50:55.374315  374821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:50:55.375918  374821 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:50:55.377093  374821 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:50:55.481222  374821 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
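	The 457-byte conflist copied above is minikube's bridge CNI configuration. The log records only the file's size, not its contents; the following is a minimal sketch of a bridge conflist of this shape, where the plugin values are assumed defaults and the host-local subnet must agree with the node PodCIDR shown later in this report:
	
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
	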
	I0229 02:50:55.549469  374821 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:50:55.549617  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:50:55.549617  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=newest-cni-052502 minikube.k8s.io/updated_at=2024_02_29T02_50_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:50:55.592564  374821 ops.go:34] apiserver oom_adj: -16
	I0229 02:50:55.811151  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:50:56.312017  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:50:56.811251  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:50:57.312113  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:50:57.812175  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:50:58.311593  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:50:58.811896  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:50:59.311271  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:50:59.811510  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:51:00.311500  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:51:00.812217  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:51:01.312107  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:51:01.811481  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:51:02.311505  374821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
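	The repeated "kubectl get sa default" runs above are minikube polling for the default service account, which the controller-manager creates only once the control plane is actually serving requests. A shell equivalent of that retry loop (iteration count and sleep interval are assumptions; minikube's real retry logic lives in Go):
	
	    # Poll until the default service account exists, then stop.
	    for i in $(seq 1 60); do
	      sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1 && break
	      sleep 0.5
	    done
	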
	
	
	==> CRI-O <==
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.669499206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175067668951765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de25f3ad-7145-4496-856b-6d5092fed393 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.671479252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d35f176-401e-460b-91c1-88bf204ac985 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.671561709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d35f176-401e-460b-91c1-88bf204ac985 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.671969239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173950902550234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8e99e2123d2e5303af936d009927f675a0330fa1d562d04d91c9671e72447a,PodSandboxId:aa7a19621db15a31f3aa5741180f7d09a6558bcc6010fa4af6e04ceaf75df77c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173931049012093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d069c34-3c34-4c30-8698-681e749d7fa4,},Annotations:map[string]string{io.kubernetes.container.hash: 831f7d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c,PodSandboxId:f2c20e2d5e60bf3c023423ccddc6b75295a3b089f89c8ad85ca9b1902c9d2f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709173927679140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kt28m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf7edc3-f4db-4d5e-ad63-ccbec64dfac4,},Annotations:map[string]string{io.kubernetes.container.hash: afa5b1b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb,PodSandboxId:a6fa5f96ebc2b6bfc6a42ad60ce69b9cf970592fa8affcdf705599b5d48cb1e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709173920277568195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tt7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e8eb713-a0cf-49f3-b93
d-7493a9d763ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6877a072,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173920158436585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051
a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d,PodSandboxId:06b1c1143ab74d6ef4e77750f790d1cd89c4c65439fa46ec5c5af993e444686f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709173916416971661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc78ffe0316f227b9b3d46d2ef42ba84,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121,PodSandboxId:aea9eb46829490f648972ab7e94364c7a87dd955b384c49407b4e4e2173ac9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709173916342842598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3606a01af513b0463e4c406da47efcb1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1457bf1e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa,PodSandboxId:35963b598dc6746414efd7f05f463a13fad12a5d48a4911a670ad20ab49f5dfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709173916378486179,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef27af45952a1a1a128b1cf3b7799f57,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226,PodSandboxId:d508fb8be975e1491a80508cab4e25dd1cbfd71f0385f51d254beada0cdf62c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709173916289625855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b65843d5609ea16863ebad71b39fd309,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: bf6905e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d35f176-401e-460b-91c1-88bf204ac985 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.726272272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0277198d-b173-4fdf-8158-3fa2ba5132d5 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.726351526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0277198d-b173-4fdf-8158-3fa2ba5132d5 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.728006379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bdef30b8-4fb3-4c34-8d28-03ce2d4f5978 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.728448295Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175067728423016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdef30b8-4fb3-4c34-8d28-03ce2d4f5978 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.729114300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87485342-5687-4e38-a716-e8e662abaafe name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.729185752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87485342-5687-4e38-a716-e8e662abaafe name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.730060586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173950902550234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8e99e2123d2e5303af936d009927f675a0330fa1d562d04d91c9671e72447a,PodSandboxId:aa7a19621db15a31f3aa5741180f7d09a6558bcc6010fa4af6e04ceaf75df77c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173931049012093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d069c34-3c34-4c30-8698-681e749d7fa4,},Annotations:map[string]string{io.kubernetes.container.hash: 831f7d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c,PodSandboxId:f2c20e2d5e60bf3c023423ccddc6b75295a3b089f89c8ad85ca9b1902c9d2f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709173927679140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kt28m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf7edc3-f4db-4d5e-ad63-ccbec64dfac4,},Annotations:map[string]string{io.kubernetes.container.hash: afa5b1b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb,PodSandboxId:a6fa5f96ebc2b6bfc6a42ad60ce69b9cf970592fa8affcdf705599b5d48cb1e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709173920277568195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tt7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e8eb713-a0cf-49f3-b93
d-7493a9d763ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6877a072,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173920158436585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051
a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d,PodSandboxId:06b1c1143ab74d6ef4e77750f790d1cd89c4c65439fa46ec5c5af993e444686f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709173916416971661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc78ffe0316f227b9b3d46d2ef42ba84,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121,PodSandboxId:aea9eb46829490f648972ab7e94364c7a87dd955b384c49407b4e4e2173ac9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709173916342842598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3606a01af513b0463e4c406da47efcb1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1457bf1e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa,PodSandboxId:35963b598dc6746414efd7f05f463a13fad12a5d48a4911a670ad20ab49f5dfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709173916378486179,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef27af45952a1a1a128b1cf3b7799f57,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226,PodSandboxId:d508fb8be975e1491a80508cab4e25dd1cbfd71f0385f51d254beada0cdf62c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709173916289625855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b65843d5609ea16863ebad71b39fd309,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: bf6905e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87485342-5687-4e38-a716-e8e662abaafe name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.783920940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=830d2b0b-0069-430f-a417-7357a4649925 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.784020155Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=830d2b0b-0069-430f-a417-7357a4649925 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.785095051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b66d48c-2c2c-481a-b8b7-5bffd25cdfea name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.785628581Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175067785604712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b66d48c-2c2c-481a-b8b7-5bffd25cdfea name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.786349507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cc0c2b5-c07b-4e3a-814a-37546ef97753 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.786432228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cc0c2b5-c07b-4e3a-814a-37546ef97753 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.786644560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173950902550234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8e99e2123d2e5303af936d009927f675a0330fa1d562d04d91c9671e72447a,PodSandboxId:aa7a19621db15a31f3aa5741180f7d09a6558bcc6010fa4af6e04ceaf75df77c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173931049012093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d069c34-3c34-4c30-8698-681e749d7fa4,},Annotations:map[string]string{io.kubernetes.container.hash: 831f7d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c,PodSandboxId:f2c20e2d5e60bf3c023423ccddc6b75295a3b089f89c8ad85ca9b1902c9d2f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709173927679140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kt28m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf7edc3-f4db-4d5e-ad63-ccbec64dfac4,},Annotations:map[string]string{io.kubernetes.container.hash: afa5b1b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb,PodSandboxId:a6fa5f96ebc2b6bfc6a42ad60ce69b9cf970592fa8affcdf705599b5d48cb1e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709173920277568195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tt7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e8eb713-a0cf-49f3-b93
d-7493a9d763ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6877a072,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173920158436585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051
a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d,PodSandboxId:06b1c1143ab74d6ef4e77750f790d1cd89c4c65439fa46ec5c5af993e444686f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709173916416971661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc78ffe0316f227b9b3d46d2ef42ba84,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121,PodSandboxId:aea9eb46829490f648972ab7e94364c7a87dd955b384c49407b4e4e2173ac9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709173916342842598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3606a01af513b0463e4c406da47efcb1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1457bf1e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa,PodSandboxId:35963b598dc6746414efd7f05f463a13fad12a5d48a4911a670ad20ab49f5dfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709173916378486179,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef27af45952a1a1a128b1cf3b7799f57,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226,PodSandboxId:d508fb8be975e1491a80508cab4e25dd1cbfd71f0385f51d254beada0cdf62c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709173916289625855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b65843d5609ea16863ebad71b39fd309,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: bf6905e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cc0c2b5-c07b-4e3a-814a-37546ef97753 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.829758463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=157ce620-9e0a-4d79-84a9-a5b7869009b7 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.829898574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=157ce620-9e0a-4d79-84a9-a5b7869009b7 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.831226785Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d957f533-ffce-4364-b6ce-56aa09d2b75f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.831905349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175067831869452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d957f533-ffce-4364-b6ce-56aa09d2b75f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.832959260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c95fa4b0-f5de-4f84-bb42-fe0e039d2f41 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.833081490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c95fa4b0-f5de-4f84-bb42-fe0e039d2f41 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:51:07 embed-certs-915633 crio[680]: time="2024-02-29 02:51:07.833452628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709173950902550234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d8e99e2123d2e5303af936d009927f675a0330fa1d562d04d91c9671e72447a,PodSandboxId:aa7a19621db15a31f3aa5741180f7d09a6558bcc6010fa4af6e04ceaf75df77c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1709173931049012093,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3d069c34-3c34-4c30-8698-681e749d7fa4,},Annotations:map[string]string{io.kubernetes.container.hash: 831f7d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c,PodSandboxId:f2c20e2d5e60bf3c023423ccddc6b75295a3b089f89c8ad85ca9b1902c9d2f7b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709173927679140357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-kt28m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf7edc3-f4db-4d5e-ad63-ccbec64dfac4,},Annotations:map[string]string{io.kubernetes.container.hash: afa5b1b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb,PodSandboxId:a6fa5f96ebc2b6bfc6a42ad60ce69b9cf970592fa8affcdf705599b5d48cb1e6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709173920277568195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tt7l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e8eb713-a0cf-49f3-b93
d-7493a9d763ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6877a072,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5,PodSandboxId:326c6ad728613ba82b6f99efab7dd4229d2d431172f37af069d48e2ba3df9a86,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1709173920158436585,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce36b3fe-a726-46f1-a411-c8e26d3b051
a,},Annotations:map[string]string{io.kubernetes.container.hash: 8c99d51f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d,PodSandboxId:06b1c1143ab74d6ef4e77750f790d1cd89c4c65439fa46ec5c5af993e444686f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709173916416971661,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc78ffe0316f227b9b3d46d2ef42ba84,},Annotati
ons:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121,PodSandboxId:aea9eb46829490f648972ab7e94364c7a87dd955b384c49407b4e4e2173ac9e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709173916342842598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3606a01af513b0463e4c406da47efcb1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 1457bf1e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa,PodSandboxId:35963b598dc6746414efd7f05f463a13fad12a5d48a4911a670ad20ab49f5dfd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709173916378486179,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef27af45952a1a1a128b1cf3b7799f57,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226,PodSandboxId:d508fb8be975e1491a80508cab4e25dd1cbfd71f0385f51d254beada0cdf62c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709173916289625855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-915633,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b65843d5609ea16863ebad71b39fd309,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: bf6905e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c95fa4b0-f5de-4f84-bb42-fe0e039d2f41 name=/runtime.v1.RuntimeService/ListContainers
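	The debug entries above are CRI RPCs (Version, ImageFsInfo, ListContainers) arriving on CRI-O's socket. The same endpoints can be exercised by hand with crictl, pointed at the socket named in this node's cri-socket annotation:
	
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers
	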
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5d03e33e30323       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   326c6ad728613       storage-provisioner
	6d8e99e2123d2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   aa7a19621db15       busybox
	6f79a4150c635       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      19 minutes ago      Running             coredns                   1                   f2c20e2d5e60b       coredns-5dd5756b68-kt28m
	8f95a3a0ad6f6       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      19 minutes ago      Running             kube-proxy                1                   a6fa5f96ebc2b       kube-proxy-6tt7l
	4d79154ed71a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   326c6ad728613       storage-provisioner
	57de9d45eaff6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      19 minutes ago      Running             kube-scheduler            1                   06b1c1143ab74       kube-scheduler-embed-certs-915633
	8fcb33bb23e69       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      19 minutes ago      Running             kube-controller-manager   1                   35963b598dc67       kube-controller-manager-embed-certs-915633
	208354e254f6c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      19 minutes ago      Running             etcd                      1                   aea9eb4682949       etcd-embed-certs-915633
	74bd751559a70       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      19 minutes ago      Running             kube-apiserver            1                   d508fb8be975e       kube-apiserver-embed-certs-915633
	
	
	==> coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60229 - 57940 "HINFO IN 7530651228205597472.1671966392532887046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014516991s
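	The single HINFO query above is CoreDNS's startup loop-detection probe; the NXDOMAIN answer is the healthy outcome. A manual in-cluster resolution check along the same lines (the pod name is illustrative and the kubectl context is assumed to match this cluster; the busybox image is the one already running in it):
	
	    kubectl --context embed-certs-915633 run dns-check --rm -it --restart=Never \
	      --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default
	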
	
	
	==> describe nodes <==
	Name:               embed-certs-915633
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-915633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=embed-certs-915633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_22_07_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:22:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-915633
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:51:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:47:47 +0000   Thu, 29 Feb 2024 02:22:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:47:47 +0000   Thu, 29 Feb 2024 02:22:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:47:47 +0000   Thu, 29 Feb 2024 02:22:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:47:47 +0000   Thu, 29 Feb 2024 02:32:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.218
	  Hostname:    embed-certs-915633
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 275405a572ea4acf891f83ae3176f9fd
	  System UUID:                275405a5-72ea-4acf-891f-83ae3176f9fd
	  Boot ID:                    b5f53730-80e9-46cc-8959-1f6a4a8b85e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-kt28m                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-915633                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-915633             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-915633    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-6tt7l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-915633             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-6p7f7               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-915633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-915633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-915633 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-915633 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-915633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-915633 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-915633 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-915633 event: Registered Node embed-certs-915633 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-915633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-915633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-915633 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-915633 event: Registered Node embed-certs-915633 in Controller
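
The two full sets of kubelet startup events (at 29m and again at 19m) line up with the stop/restart this test group performs, and the node returned to Ready at 02:32:09. To pull the same event history straight from the cluster, something like the following works (all flags are standard kubectl; the context name matches this profile):

	# list events for this node, newest last
	kubectl --context embed-certs-915633 get events -A \
	  --field-selector involvedObject.name=embed-certs-915633 \
	  --sort-by=.lastTimestamp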
	
	
	==> dmesg <==
	[Feb29 02:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.065405] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046527] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.122199] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.454328] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.766420] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.892754] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.063492] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.090998] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.197923] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.149536] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.255637] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[ +17.538360] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.066808] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.044177] kauditd_printk_skb: 84 callbacks suppressed
	[Feb29 02:32] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.425526] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] <==
	{"level":"info","ts":"2024-02-29T02:31:56.965044Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.218:2380"}
	{"level":"info","ts":"2024-02-29T02:31:57.907167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:57.907265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:57.907298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 received MsgPreVoteResp from d4bfeef2bb38c2b5 at term 2"}
	{"level":"info","ts":"2024-02-29T02:31:57.907327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:57.907351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 received MsgVoteResp from d4bfeef2bb38c2b5 at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:57.907378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4bfeef2bb38c2b5 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:57.907403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4bfeef2bb38c2b5 elected leader d4bfeef2bb38c2b5 at term 3"}
	{"level":"info","ts":"2024-02-29T02:31:57.910974Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d4bfeef2bb38c2b5","local-member-attributes":"{Name:embed-certs-915633 ClientURLs:[https://192.168.50.218:2379]}","request-path":"/0/members/d4bfeef2bb38c2b5/attributes","cluster-id":"db562ccfd877cf13","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T02:31:57.911167Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:31:57.911778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T02:31:57.912239Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.218:2379"}
	{"level":"info","ts":"2024-02-29T02:31:57.912649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:31:57.91276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:31:57.912812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:41:57.948612Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":864}
	{"level":"info","ts":"2024-02-29T02:41:57.951571Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":864,"took":"2.521412ms","hash":3716348948}
	{"level":"info","ts":"2024-02-29T02:41:57.951654Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3716348948,"revision":864,"compact-revision":-1}
	{"level":"info","ts":"2024-02-29T02:46:57.95682Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1106}
	{"level":"info","ts":"2024-02-29T02:46:57.958596Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1106,"took":"1.295855ms","hash":176776369}
	{"level":"info","ts":"2024-02-29T02:46:57.958889Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":176776369,"revision":1106,"compact-revision":864}
	{"level":"info","ts":"2024-02-29T02:50:44.741935Z","caller":"traceutil/trace.go:171","msg":"trace[42592808] transaction","detail":"{read_only:false; response_revision:1534; number_of_response:1; }","duration":"373.598149ms","start":"2024-02-29T02:50:44.368291Z","end":"2024-02-29T02:50:44.741889Z","steps":["trace[42592808] 'process raft request'  (duration: 373.351465ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T02:50:44.743246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T02:50:44.368277Z","time spent":"373.855719ms","remote":"127.0.0.1:56124","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1533 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-02-29T02:50:44.962899Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.860851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T02:50:44.963083Z","caller":"traceutil/trace.go:171","msg":"trace[1261461791] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1534; }","duration":"131.059148ms","start":"2024-02-29T02:50:44.832011Z","end":"2024-02-29T02:50:44.96307Z","steps":["trace[1261461791] 'range keys from in-memory index tree'  (duration: 130.769959ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:51:08 up 19 min,  0 users,  load average: 0.28, 0.21, 0.18
	Linux embed-certs-915633 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] <==
	W0229 02:47:00.640897       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:47:00.641026       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:47:00.641036       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:47:00.640914       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:47:00.641080       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:47:00.643222       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 02:47:59.484864       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:48:00.642052       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:48:00.642139       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:48:00.642151       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:48:00.644515       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:48:00.644569       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:48:00.644576       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 02:48:59.485142       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 02:49:59.485638       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:50:00.642326       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:50:00.642411       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:50:00.642418       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:50:00.644969       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:50:00.644997       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:50:00.645003       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 02:50:59.484840       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
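
Every OpenAPI failure above is for the same aggregated API, v1beta1.metrics.k8s.io: the apiserver gets a 503 from the metrics-server Service because its only backing pod is stuck in ImagePullBackOff (see the kubelet log below). The APIService object records this state directly; the Available condition is typically False here (e.g. with a missing-endpoints reason) while metrics-server has no running pod:

	# inspect the aggregated API's availability condition
	kubectl --context embed-certs-915633 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")]}'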
	
	
	==> kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] <==
	I0229 02:45:13.384200       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:45:42.760617       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:45:43.394405       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:46:12.772569       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:46:13.403873       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:46:42.777518       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:46:43.413181       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:47:12.785461       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:47:13.421531       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:47:42.791174       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:47:43.432223       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:48:12.804885       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:48:13.440343       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 02:48:24.670956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="317.182µs"
	I0229 02:48:38.676206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.271028ms"
	E0229 02:48:42.810254       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:48:43.448923       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:49:12.816232       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:49:13.457316       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:49:42.821952       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:49:43.466206       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:50:12.837499       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:50:13.481128       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:50:42.843125       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:50:43.490988       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
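
These controller-manager errors are a downstream symptom of the same unavailable metrics.k8s.io backend: resource-quota and garbage-collector discovery both trip over the stale group version. Hitting the aggregated API directly shows what the controllers see (get --raw is standard kubectl):

	# expect "service unavailable" while metrics-server has no running pod
	kubectl --context embed-certs-915633 get --raw /apis/metrics.k8s.io/v1beta1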
	
	
	==> kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] <==
	I0229 02:32:00.463615       1 server_others.go:69] "Using iptables proxy"
	I0229 02:32:00.474343       1 node.go:141] Successfully retrieved node IP: 192.168.50.218
	I0229 02:32:00.602762       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:32:00.602835       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:32:00.605346       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:32:00.605424       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:32:00.605614       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:32:00.606327       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:32:00.607976       1 config.go:188] "Starting service config controller"
	I0229 02:32:00.608058       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:32:00.608620       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:32:00.608863       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:32:00.609470       1 config.go:315] "Starting node config controller"
	I0229 02:32:00.609600       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:32:00.710421       1 shared_informer.go:318] Caches are synced for service config
	I0229 02:32:00.713597       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:32:00.713783       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] <==
	I0229 02:31:57.557919       1 serving.go:348] Generated self-signed cert in-memory
	W0229 02:31:59.578096       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 02:31:59.578186       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 02:31:59.578214       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 02:31:59.578238       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 02:31:59.636838       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 02:31:59.636999       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:31:59.641452       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 02:31:59.643792       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 02:31:59.643911       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 02:31:59.643963       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 02:31:59.745045       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:48:55 embed-certs-915633 kubelet[890]: E0229 02:48:55.676455     890 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:48:55 embed-certs-915633 kubelet[890]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:48:55 embed-certs-915633 kubelet[890]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:48:55 embed-certs-915633 kubelet[890]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:48:55 embed-certs-915633 kubelet[890]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:49:04 embed-certs-915633 kubelet[890]: E0229 02:49:04.653397     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:49:15 embed-certs-915633 kubelet[890]: E0229 02:49:15.654255     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:49:26 embed-certs-915633 kubelet[890]: E0229 02:49:26.653807     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:49:41 embed-certs-915633 kubelet[890]: E0229 02:49:41.653374     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:49:55 embed-certs-915633 kubelet[890]: E0229 02:49:55.673463     890 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:49:55 embed-certs-915633 kubelet[890]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:49:55 embed-certs-915633 kubelet[890]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:49:55 embed-certs-915633 kubelet[890]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:49:55 embed-certs-915633 kubelet[890]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:49:56 embed-certs-915633 kubelet[890]: E0229 02:49:56.653965     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:50:10 embed-certs-915633 kubelet[890]: E0229 02:50:10.654171     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:50:23 embed-certs-915633 kubelet[890]: E0229 02:50:23.655316     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:50:37 embed-certs-915633 kubelet[890]: E0229 02:50:37.654255     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:50:49 embed-certs-915633 kubelet[890]: E0229 02:50:49.652961     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
	Feb 29 02:50:55 embed-certs-915633 kubelet[890]: E0229 02:50:55.674206     890 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:50:55 embed-certs-915633 kubelet[890]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:50:55 embed-certs-915633 kubelet[890]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:50:55 embed-certs-915633 kubelet[890]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:50:55 embed-certs-915633 kubelet[890]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:51:04 embed-certs-915633 kubelet[890]: E0229 02:51:04.653603     890 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6p7f7" podUID="b1dc8143-2d47-4cea-b4a1-61808350d2d6"
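
The kubelet loop above keeps failing to start metrics-server because its image host, fake.domain, can never resolve, so the pull can never succeed; in this suite that appears intentional, keeping the pod permanently non-running. One way to confirm the configured image, assuming the pod's owner is the usual metrics-server Deployment:

	# print the image the metrics-server pod template asks for
	kubectl --context embed-certs-915633 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'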
	
	
	==> storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] <==
	I0229 02:32:00.318485       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0229 02:32:30.326383       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] <==
	I0229 02:32:31.006877       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:32:31.028038       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:32:31.028317       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 02:32:48.433885       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 02:32:48.434575       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b8069b2-6063-4200-a8cc-5f7225a45a09", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-915633_1901a488-ed44-4337-b3ac-01ae11fb0d43 became leader
	I0229 02:32:48.434917       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-915633_1901a488-ed44-4337-b3ac-01ae11fb0d43!
	I0229 02:32:48.535534       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-915633_1901a488-ed44-4337-b3ac-01ae11fb0d43!
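
The two provisioner logs tell a restart story: the first instance died at 02:32:30 because it could not reach the apiserver's in-cluster VIP (dial tcp 10.96.0.1:443: i/o timeout), and its replacement came up a second later and won the k8s.io-minikube-hostpath leader lease at 02:32:48. Since the election uses an Endpoints-based lock (per the "acquire leader lease" lines above), the current holder can be read off that object:

	# look for the control-plane.alpha.kubernetes.io/leader annotation in the output
	kubectl --context embed-certs-915633 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml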
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-915633 -n embed-certs-915633
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-915633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6p7f7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-915633 describe pod metrics-server-57f55c9bc5-6p7f7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-915633 describe pod metrics-server-57f55c9bc5-6p7f7: exit status 1 (84.874158ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6p7f7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-915633 describe pod metrics-server-57f55c9bc5-6p7f7: exit status 1
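
The NotFound here is most likely a benign race in the post-mortem itself: metrics-server-57f55c9bc5-6p7f7 was listed as non-running, but the pod churned before the describe ran (the controller-manager log above shows its ReplicaSet re-syncing). A race-tolerant variant would re-list and describe in one pass, e.g.:

	# describe every currently non-running pod, namespace-aware
	kubectl --context embed-certs-915633 get pods -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	  | while read -r ns name; do
	      kubectl --context embed-certs-915633 -n "$ns" describe pod "$name"
	    done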
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (336.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (89.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
E0229 02:49:09.039669  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
E0229 02:49:18.684197  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
E0229 02:49:37.822084  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/custom-flannel-117441/client.crt: no such file or directory
E0229 02:49:37.825240  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.160:8443: connect: connection refused
[identical WARNING repeated 30 times]
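The connection-refused warnings above are the test helper repeatedly polling the dashboard pod list while the apiserver endpoint is down. A minimal manual reproduction of that probe, assuming only curl on the host (illustrative; not part of the test suite):

	# hypothetical manual probe of the same endpoint the helper polls
	curl -sk --max-time 5 \
	  "https://192.168.39.160:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard" \
	  || echo "apiserver unreachable"

A connection-refused result here, matching the warnings, would suggest the apiserver never came back after the stop/start cycle rather than a problem with the dashboard addon itself.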
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275488 -n old-k8s-version-275488
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 2 (251.644434ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-275488" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-275488 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-275488 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.093µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-275488 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
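For manual triage once the apiserver responds again, the state the test expected can be checked directly with kubectl. The context, namespace, label selector, and deployment name below are taken from the log; the commands themselves are a hypothetical follow-up, not something the suite runs:

	# list the dashboard pods the test waited 9m0s for
	kubectl --context old-k8s-version-275488 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard
	# print the scraper image, expected to contain registry.k8s.io/echoserver:1.4
	kubectl --context old-k8s-version-275488 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'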
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 2 (255.276588ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-275488 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-275488 logs -n 25: (1.723001353s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-117441 sudo cat                              | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo                                  | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo find                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-117441 sudo crio                             | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-117441                                       | bridge-117441                | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	| delete  | -p                                                     | disable-driver-mounts-542968 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:22 UTC |
	|         | disable-driver-mounts-542968                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:23 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-915633            | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247751             | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071485  | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275488        | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-915633                 | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247751                  | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071485       | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:40 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275488             | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:26:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:26:36.132854  370051 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:26:36.133389  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133407  370051 out.go:304] Setting ErrFile to fd 2...
	I0229 02:26:36.133414  370051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:26:36.133912  370051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:26:36.134959  370051 out.go:298] Setting JSON to false
	I0229 02:26:36.135907  370051 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7739,"bootTime":1709165857,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:26:36.135982  370051 start.go:139] virtualization: kvm guest
	I0229 02:26:36.137916  370051 out.go:177] * [old-k8s-version-275488] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:26:36.139510  370051 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:26:36.139543  370051 notify.go:220] Checking for updates...
	I0229 02:26:36.141206  370051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:26:36.142776  370051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:26:36.143982  370051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:26:36.145097  370051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:26:36.146170  370051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:26:36.147751  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:26:36.148198  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.148298  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.163969  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0229 02:26:36.164373  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.164977  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.165003  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.165394  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.165584  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.167312  370051 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 02:26:36.168337  370051 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:26:36.168641  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:26:36.168683  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:26:36.184089  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0229 02:26:36.184605  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:26:36.185181  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:26:36.185210  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:26:36.185551  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:26:36.185723  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:26:36.222261  370051 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:26:36.223363  370051 start.go:299] selected driver: kvm2
	I0229 02:26:36.223374  370051 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.223487  370051 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:26:36.224130  370051 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.224195  370051 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:26:36.239302  370051 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:26:36.239664  370051 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 02:26:36.239741  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:26:36.239755  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:26:36.239765  370051 start_flags.go:323] config:
	{Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:26:36.239908  370051 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:26:36.241466  370051 out.go:177] * Starting control plane node old-k8s-version-275488 in cluster old-k8s-version-275488
	I0229 02:26:35.666509  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:38.738602  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:36.242536  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:26:36.242564  370051 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 02:26:36.242573  370051 cache.go:56] Caching tarball of preloaded images
	I0229 02:26:36.242641  370051 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:26:36.242651  370051 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 02:26:36.242742  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:26:36.242905  370051 start.go:365] acquiring machines lock for old-k8s-version-275488: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:26:44.818494  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:47.890482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:53.970508  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:26:57.042448  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:03.122506  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:06.194415  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:12.274520  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:15.346558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:21.426515  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:24.498557  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:30.578502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:33.650482  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:39.730548  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:42.802507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:48.882487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:51.954507  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:27:58.034498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:01.106530  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:07.186513  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:10.258485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:16.338519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:19.410521  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:25.490436  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:28.562555  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:34.642534  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:37.714514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:43.794519  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:46.866487  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:52.946514  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:28:56.018488  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:02.098512  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:05.170472  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:11.250485  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:14.322454  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:20.402450  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:23.474533  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:29.554541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:32.626489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:38.706558  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:41.778502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:47.858493  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:50.930489  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:29:57.010541  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:00.082537  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:06.162498  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
	I0229 02:30:09.234502  369508 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.218:22: connect: no route to host
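	The long run of "no route to host" dials above (pid 369508) is libmachine failing to reach SSH on the embed-certs-915633 guest at 192.168.50.218 for several minutes. A quick reachability probe from the host, assuming netcat is available (hypothetical, for triage only):
	
	  # does anything answer on the guest's SSH port?
	  nc -z -w 10 192.168.50.218 22 && echo "ssh port open" || echo "ssh port unreachable"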
	I0229 02:30:12.238620  369591 start.go:369] acquired machines lock for "no-preload-247751" in 4m33.303501223s
	I0229 02:30:12.238705  369591 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:12.238716  369591 fix.go:54] fixHost starting: 
	I0229 02:30:12.239171  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:12.239240  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:12.254984  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0229 02:30:12.255490  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:12.255991  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:30:12.256012  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:12.256463  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:12.256668  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:12.256840  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:30:12.258341  369591 fix.go:102] recreateIfNeeded on no-preload-247751: state=Stopped err=<nil>
	I0229 02:30:12.258371  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	W0229 02:30:12.258522  369591 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:12.260176  369591 out.go:177] * Restarting existing kvm2 VM for "no-preload-247751" ...
	I0229 02:30:12.261521  369591 main.go:141] libmachine: (no-preload-247751) Calling .Start
	I0229 02:30:12.261678  369591 main.go:141] libmachine: (no-preload-247751) Ensuring networks are active...
	I0229 02:30:12.262375  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network default is active
	I0229 02:30:12.262642  369591 main.go:141] libmachine: (no-preload-247751) Ensuring network mk-no-preload-247751 is active
	I0229 02:30:12.262962  369591 main.go:141] libmachine: (no-preload-247751) Getting domain xml...
	I0229 02:30:12.263526  369591 main.go:141] libmachine: (no-preload-247751) Creating domain...
	I0229 02:30:13.474816  369591 main.go:141] libmachine: (no-preload-247751) Waiting to get IP...
	I0229 02:30:13.475810  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.476251  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.476305  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.476230  370599 retry.go:31] will retry after 302.404435ms: waiting for machine to come up
	I0229 02:30:13.780776  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:13.781237  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:13.781265  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:13.781193  370599 retry.go:31] will retry after 364.673363ms: waiting for machine to come up
	I0229 02:30:12.236310  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:12.236352  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:30:12.238426  369508 machine.go:91] provisioned docker machine in 4m37.406828317s
	I0229 02:30:12.238513  369508 fix.go:56] fixHost completed within 4m37.429140371s
	I0229 02:30:12.238526  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 4m37.429164063s
	W0229 02:30:12.238553  369508 start.go:694] error starting host: provision: host is not running
	W0229 02:30:12.238763  369508 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0229 02:30:12.238784  369508 start.go:709] Will try again in 5 seconds ...
	I0229 02:30:14.148040  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.148530  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.148561  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.148471  370599 retry.go:31] will retry after 430.606986ms: waiting for machine to come up
	I0229 02:30:14.581180  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:14.581649  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:14.581679  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:14.581598  370599 retry.go:31] will retry after 557.726488ms: waiting for machine to come up
	I0229 02:30:15.141289  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.141736  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.141767  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.141675  370599 retry.go:31] will retry after 611.257074ms: waiting for machine to come up
	I0229 02:30:15.754464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:15.754802  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:15.754831  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:15.754752  370599 retry.go:31] will retry after 905.484801ms: waiting for machine to come up
	I0229 02:30:16.661691  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:16.662072  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:16.662099  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:16.662020  370599 retry.go:31] will retry after 1.007584217s: waiting for machine to come up
	I0229 02:30:17.671565  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:17.672118  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:17.672159  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:17.672048  370599 retry.go:31] will retry after 933.310317ms: waiting for machine to come up
	I0229 02:30:18.607108  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:18.607473  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:18.607496  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:18.607426  370599 retry.go:31] will retry after 1.135856775s: waiting for machine to come up
	I0229 02:30:17.239210  369508 start.go:365] acquiring machines lock for embed-certs-915633: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:30:19.744656  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:19.745017  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:19.745047  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:19.744969  370599 retry.go:31] will retry after 2.184552748s: waiting for machine to come up
	I0229 02:30:21.932313  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:21.932764  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:21.932794  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:21.932711  370599 retry.go:31] will retry after 2.256573009s: waiting for machine to come up
	I0229 02:30:24.191551  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:24.191987  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:24.192016  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:24.191948  370599 retry.go:31] will retry after 3.0850751s: waiting for machine to come up
	I0229 02:30:27.278526  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:27.278941  369591 main.go:141] libmachine: (no-preload-247751) DBG | unable to find current IP address of domain no-preload-247751 in network mk-no-preload-247751
	I0229 02:30:27.278977  369591 main.go:141] libmachine: (no-preload-247751) DBG | I0229 02:30:27.278914  370599 retry.go:31] will retry after 3.196492358s: waiting for machine to come up
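	The retry.go lines above show libmachine polling for the restarted VM's DHCP lease with growing delays until an address appears. A schematic shell equivalent of that wait-for-IP loop, assuming libvirt's virsh on the host (delays illustrative; this is not minikube's actual implementation):
	
	  # poll the libvirt lease table with increasing delays, as retry.go does
	  for delay in 0.3 0.6 1 2 4 8; do
	    ip=$(sudo virsh -c qemu:///system domifaddr no-preload-247751 2>/dev/null \
	          | awk '/ipv4/ {sub(/\/.*/, "", $4); print $4}')
	    [ -n "$ip" ] && { echo "got IP: $ip"; break; }
	    sleep "$delay"
	  done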
	I0229 02:30:31.627482  369869 start.go:369] acquired machines lock for "default-k8s-diff-port-071485" in 4m6.129938439s
	I0229 02:30:31.627553  369869 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:31.627561  369869 fix.go:54] fixHost starting: 
	I0229 02:30:31.628005  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:31.628052  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:31.645217  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39375
	I0229 02:30:31.645607  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:31.646146  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:30:31.646179  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:31.646526  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:31.646754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:31.646941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:30:31.648372  369869 fix.go:102] recreateIfNeeded on default-k8s-diff-port-071485: state=Stopped err=<nil>
	I0229 02:30:31.648410  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	W0229 02:30:31.648603  369869 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:31.650778  369869 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-071485" ...
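	The machines-lock spec above (Delay:500ms Timeout:13m0s) describes a poll-until-deadline acquisition over a shared lock. A hedged Go sketch of that contract using a lock file; minikube's real lock is a different implementation, and the names here are illustrative.
	
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// acquireLock polls for the lock file every delay until timeout,
	// returning a release func once the exclusive create succeeds.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s: %v", path, err)
			}
			time.Sleep(delay) // the Delay:500ms field printed in the log
		}
	}
	
	func main() {
		start := time.Now()
		release, err := acquireLock("/tmp/machines-demo.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Printf("acquired machines lock in %s\n", time.Since(start))
	}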
	I0229 02:30:30.479186  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479664  369591 main.go:141] libmachine: (no-preload-247751) Found IP for machine: 192.168.72.114
	I0229 02:30:30.479694  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has current primary IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.479705  369591 main.go:141] libmachine: (no-preload-247751) Reserving static IP address...
	I0229 02:30:30.480161  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.480199  369591 main.go:141] libmachine: (no-preload-247751) DBG | skip adding static IP to network mk-no-preload-247751 - found existing host DHCP lease matching {name: "no-preload-247751", mac: "52:54:00:fa:c1:ec", ip: "192.168.72.114"}
	I0229 02:30:30.480213  369591 main.go:141] libmachine: (no-preload-247751) Reserved static IP address: 192.168.72.114
	I0229 02:30:30.480233  369591 main.go:141] libmachine: (no-preload-247751) Waiting for SSH to be available...
	I0229 02:30:30.480246  369591 main.go:141] libmachine: (no-preload-247751) DBG | Getting to WaitForSSH function...
	I0229 02:30:30.482557  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.482907  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.482935  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.483110  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH client type: external
	I0229 02:30:30.483136  369591 main.go:141] libmachine: (no-preload-247751) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa (-rw-------)
	I0229 02:30:30.483166  369591 main.go:141] libmachine: (no-preload-247751) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:30.483180  369591 main.go:141] libmachine: (no-preload-247751) DBG | About to run SSH command:
	I0229 02:30:30.483197  369591 main.go:141] libmachine: (no-preload-247751) DBG | exit 0
	I0229 02:30:30.610329  369591 main.go:141] libmachine: (no-preload-247751) DBG | SSH cmd err, output: <nil>: 
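	The WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and runs `exit 0`; a zero exit status is the signal that sshd is reachable. A Go sketch of that probe, with the host and key path taken from the log as placeholders:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// waitForSSH runs `ssh ... docker@host exit 0` once; callers would wrap it
	// in the retry loop shown earlier.
	func waitForSSH(host, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + host,
			"exit 0",
		}
		out, err := exec.Command("ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh probe failed: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		fmt.Println(waitForSSH("192.168.72.114", "/path/to/id_rsa"))
	}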
	I0229 02:30:30.610691  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetConfigRaw
	I0229 02:30:30.611393  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.614007  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614393  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.614426  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.614689  369591 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/config.json ...
	I0229 02:30:30.614872  369591 machine.go:88] provisioning docker machine ...
	I0229 02:30:30.614892  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:30.615096  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615250  369591 buildroot.go:166] provisioning hostname "no-preload-247751"
	I0229 02:30:30.615272  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.615444  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.617525  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617800  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.617835  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.617898  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.618095  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618289  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.618424  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.618564  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.618790  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.618807  369591 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-247751 && echo "no-preload-247751" | sudo tee /etc/hostname
	I0229 02:30:30.740902  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-247751
	
	I0229 02:30:30.740952  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.743879  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744353  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.744396  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.744584  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.744843  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745014  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.745197  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.745351  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:30.745525  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:30.745543  369591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-247751' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-247751/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-247751' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:30.867175  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:30.867209  369591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:30.867229  369591 buildroot.go:174] setting up certificates
	I0229 02:30:30.867240  369591 provision.go:83] configureAuth start
	I0229 02:30:30.867248  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetMachineName
	I0229 02:30:30.867521  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:30.870143  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870443  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.870464  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.870678  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.872992  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873434  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.873463  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.873643  369591 provision.go:138] copyHostCerts
	I0229 02:30:30.873713  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:30.873740  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:30.873830  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:30.873937  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:30.873948  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:30.873992  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:30.874070  369591 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:30.874080  369591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:30.874110  369591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:30.874240  369591 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.no-preload-247751 san=[192.168.72.114 192.168.72.114 localhost 127.0.0.1 minikube no-preload-247751]
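	The provision step above issues a server certificate signed by the minikube CA with the SAN list printed in the log. A hedged Go sketch of that issuance using crypto/x509; key size, lifetimes, and the subject are illustrative, not minikube's actual cert helper.
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func main() {
		// Self-signed CA (stand-in for ca.pem / ca-key.pem); errors elided for brevity.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"minikubeCA (illustrative)"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server cert with the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-247751"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("192.168.72.114"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "no-preload-247751"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
	}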
	I0229 02:30:30.921711  369591 provision.go:172] copyRemoteCerts
	I0229 02:30:30.921769  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:30.921793  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:30.924128  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924436  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:30.924474  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:30.924628  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:30.924815  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:30.924975  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:30.925073  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.009229  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:31.035962  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:30:31.062947  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:31.089920  369591 provision.go:86] duration metric: configureAuth took 222.667724ms
	I0229 02:30:31.089947  369591 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:31.090145  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:30:31.090256  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.092831  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093148  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.093192  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.093338  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.093511  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093699  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.093864  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.094032  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.094196  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.094211  369591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:31.381995  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:31.382023  369591 machine.go:91] provisioned docker machine in 767.136363ms
	I0229 02:30:31.382036  369591 start.go:300] post-start starting for "no-preload-247751" (driver="kvm2")
	I0229 02:30:31.382049  369591 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:31.382066  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.382560  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:31.382596  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.385219  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385574  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.385602  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.385742  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.385955  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.386091  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.386254  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.469621  369591 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:31.474615  369591 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:31.474640  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:31.474702  369591 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:31.474772  369591 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:31.474867  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:31.484964  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
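	The filesync scan above mirrors everything under .minikube/files into the guest at the same path (here files/etc/ssl/certs/3238852.pem -> /etc/ssl/certs/3238852.pem). A small Go sketch of that destination mapping, assuming only the path convention shown in the log:
	
	package main
	
	import (
		"fmt"
		"io/fs"
		"path/filepath"
		"strings"
	)
	
	// localAssets walks the files/ root and returns the guest-side destination
	// path for each local asset found.
	func localAssets(root string) ([]string, error) {
		var dests []string
		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			// Destination is the path relative to the files/ root, rooted at /.
			rel, err := filepath.Rel(root, path)
			if err != nil {
				return err
			}
			dests = append(dests, "/"+strings.ReplaceAll(rel, string(filepath.Separator), "/"))
			return nil
		})
		return dests, err
	}
	
	func main() {
		d, err := localAssets("/home/jenkins/minikube-integration/18063-316644/.minikube/files")
		fmt.Println(d, err)
	}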
	I0229 02:30:31.512459  369591 start.go:303] post-start completed in 130.406384ms
	I0229 02:30:31.512519  369591 fix.go:56] fixHost completed within 19.27376704s
	I0229 02:30:31.512569  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.515169  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515568  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.515596  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.515717  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.515944  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516108  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.516260  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.516417  369591 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:31.516592  369591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.114 22 <nil> <nil>}
	I0229 02:30:31.516605  369591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:30:31.627335  369591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173831.594794890
	
	I0229 02:30:31.627357  369591 fix.go:206] guest clock: 1709173831.594794890
	I0229 02:30:31.627366  369591 fix.go:219] Guest: 2024-02-29 02:30:31.59479489 +0000 UTC Remote: 2024-02-29 02:30:31.512545974 +0000 UTC m=+292.733991044 (delta=82.248916ms)
	I0229 02:30:31.627395  369591 fix.go:190] guest clock delta is within tolerance: 82.248916ms
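	fix.go reads the guest clock with `date +%s.%N` and compares it to the host; the 82ms delta above is inside tolerance, so no resync happens. A sketch of the parse-and-compare step, assuming the 9-digit nanosecond fraction that `date +%s.%N` emits; the tolerance value is an assumption:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// parseGuestClock turns "1709173831.594794890" into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}
	
	func main() {
		guest, _ := parseGuestClock("1709173831.594794890") // value from the log
		delta := guest.Sub(time.Now())
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %s (tolerance e.g. 2s)\n", delta)
	}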
	I0229 02:30:31.627403  369591 start.go:83] releasing machines lock for "no-preload-247751", held for 19.38873796s
	I0229 02:30:31.627429  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.627713  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:31.630486  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.630930  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.630959  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.631131  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631640  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631830  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:30:31.631920  369591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:31.631983  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.632122  369591 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:31.632160  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:30:31.634658  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.634874  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635050  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635079  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635348  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635354  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:31.635379  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:31.635478  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:30:31.635566  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635633  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:30:31.635758  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635768  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:30:31.635934  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.635940  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:30:31.719735  369591 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:31.739831  369591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:31.891138  369591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:31.899497  369591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:31.899569  369591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:31.921755  369591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:30:31.921785  369591 start.go:475] detecting cgroup driver to use...
	I0229 02:30:31.921896  369591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:31.938157  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:31.952761  369591 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:31.952834  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:31.966785  369591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:31.980931  369591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:32.091879  369591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:32.261190  369591 docker.go:233] disabling docker service ...
	I0229 02:30:32.261272  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:32.278862  369591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:32.295382  369591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:32.433426  369591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:32.557975  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:30:32.573791  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:32.595797  369591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:32.595848  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.608978  369591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:32.609042  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.621681  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:32.634251  369591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
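	The sed invocations above pin the pause image, switch cri-o to the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod". The same transform expressed as a self-contained Go snippet over an illustrative 02-crio.conf fragment (the real file carries more settings):
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.8\"\n" +
			"cgroup_manager = \"systemd\"\n" +
			"conmon_cgroup = \"system.slice\"\n"
	
		// sed '/conmon_cgroup = .*/d'
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' followed by
		// sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}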
	I0229 02:30:32.647107  369591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:30:32.660478  369591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:32.672596  369591 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:32.672662  369591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:32.688480  369591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
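	The status-255 sysctl above is the expected miss before br_netfilter is loaded; the code falls back to modprobe and then enables IPv4 forwarding. A compact Go sketch of that fallback, shelling out to the same commands seen in the log:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run executes a command and wraps any failure with its combined output.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
		}
		return nil
	}
	
	func main() {
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			// Module not loaded yet; this is the "might be okay" branch in the log.
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				panic(err)
			}
		}
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			panic(err)
		}
		fmt.Println("bridge netfilter and ip_forward configured")
	}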
	I0229 02:30:32.700769  369591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:32.823703  369591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:30:33.004444  369591 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:33.004531  369591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:33.010801  369591 start.go:543] Will wait 60s for crictl version
	I0229 02:30:33.010862  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.015224  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:33.064627  369591 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:33.064721  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.108265  369591 ssh_runner.go:195] Run: crio --version
	I0229 02:30:33.142639  369591 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 02:30:33.144169  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetIP
	I0229 02:30:33.147250  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147609  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:30:33.147644  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:30:33.147836  369591 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:33.153138  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:33.169427  369591 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:30:33.169481  369591 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:33.214079  369591 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 02:30:33.214113  369591 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:30:33.214193  369591 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.214216  369591 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.214252  369591 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.214276  369591 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.214335  369591 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.214323  369591 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.214354  369591 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 02:30:33.214241  369591 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.215880  369591 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0229 02:30:33.215862  369591 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.215928  369591 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.215947  369591 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:33.216082  369591 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.216136  369591 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.216252  369591 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
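	With no preload tarball for v1.29.0-rc.2, every image goes through the pipeline traced below: inspect in the runtime, `crictl rmi` the stale tag, skip the transfer when `stat -c "%s %y"` matches the cached tarball, then `podman load`. A hedged Go sketch of one iteration; helper names are illustrative:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// imagePresent asks the runtime whether ref is already loaded.
	func imagePresent(ref string) bool {
		return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
	}
	
	// loadImage mirrors one iteration of the pipeline above for a single image.
	func loadImage(ref, tarball string) error {
		if imagePresent(ref) {
			return nil // nothing to transfer
		}
		// Drop the stale tag first, as the `crictl rmi` lines do.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", ref).Run()
		// The real flow copies the tarball only when `stat -c "%s %y"` differs;
		// here we assume it already sits at its destination path.
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
		}
		return nil
	}
	
	func main() {
		fmt.Println(loadImage("registry.k8s.io/pause:3.9", "/var/lib/minikube/images/pause_3.9"))
	}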
	I0229 02:30:33.348095  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 02:30:33.434211  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.496911  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.499249  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.503235  369591 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0229 02:30:33.503274  369591 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.503307  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.507506  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.548265  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.551287  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.589427  369591 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0229 02:30:33.589474  369591 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.589523  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.590660  369591 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0229 02:30:33.590688  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0229 02:30:33.590708  369591 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.590763  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.636886  369591 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0229 02:30:33.636934  369591 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.637001  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.664221  369591 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0229 02:30:33.664266  369591 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.664316  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.691890  369591 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0229 02:30:33.691945  369591 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.691978  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0229 02:30:33.691993  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:33.692003  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692096  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0229 02:30:33.692107  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0229 02:30:33.692104  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.692165  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0229 02:30:33.793616  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793708  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0229 02:30:33.793723  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:33.793772  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793839  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0229 02:30:33.793853  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:33.793856  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0229 02:30:33.793884  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0229 02:30:33.793902  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:33.793910  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:33.793914  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:33.793936  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0229 02:30:31.652037  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Start
	I0229 02:30:31.652202  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring networks are active...
	I0229 02:30:31.652984  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network default is active
	I0229 02:30:31.653457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Ensuring network mk-default-k8s-diff-port-071485 is active
	I0229 02:30:31.653909  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Getting domain xml...
	I0229 02:30:31.654724  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Creating domain...
	I0229 02:30:32.911561  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting to get IP...
	I0229 02:30:32.912505  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.912932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:32.913032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:32.912928  370716 retry.go:31] will retry after 285.213813ms: waiting for machine to come up
	I0229 02:30:33.199327  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199733  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.199764  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.199678  370716 retry.go:31] will retry after 334.890426ms: waiting for machine to come up
	I0229 02:30:33.536492  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.536976  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.537006  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.536924  370716 retry.go:31] will retry after 344.946846ms: waiting for machine to come up
	I0229 02:30:33.883432  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883911  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:33.883941  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:33.883858  370716 retry.go:31] will retry after 516.135135ms: waiting for machine to come up
	I0229 02:30:34.401167  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401592  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.401621  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.401543  370716 retry.go:31] will retry after 538.013174ms: waiting for machine to come up
	I0229 02:30:34.941529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942080  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:34.942116  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:34.942039  370716 retry.go:31] will retry after 883.013858ms: waiting for machine to come up
	I0229 02:30:33.850786  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0229 02:30:33.850868  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0229 02:30:33.850977  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:34.154343  369591 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.987957  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (3.194013383s)
	I0229 02:30:36.987999  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0229 02:30:36.988100  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.194139784s)
	I0229 02:30:36.988127  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0229 02:30:36.988148  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (3.194207246s)
	I0229 02:30:36.988178  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0229 02:30:36.988156  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988191  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (3.194323563s)
	I0229 02:30:36.988206  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988236  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0229 02:30:36.988269  369591 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.833890629s)
	I0229 02:30:36.988240  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (3.13724749s)
	I0229 02:30:36.988310  369591 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 02:30:36.988331  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0229 02:30:36.988343  369591 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:36.988375  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:30:36.993483  369591 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:30:38.351556  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.363290185s)
	I0229 02:30:38.351599  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0229 02:30:38.351633  369591 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351632  369591 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.358113254s)
	I0229 02:30:38.351686  369591 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0229 02:30:38.351705  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0229 02:30:38.351782  369591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
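	The `ssh_runner.go:235] Completed: ... (3.19s)` lines come from timing each remote command and reporting the duration on completion. A minimal sketch of such a wrapper; the one-second reporting threshold is an assumption:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// runTimed executes a command and logs its duration when it runs long.
	func runTimed(name string, args ...string) error {
		start := time.Now()
		err := exec.Command(name, args...).Run()
		if d := time.Since(start); d > time.Second {
			fmt.Printf("Completed: %s %v: (%s)\n", name, args, d)
		}
		return err
	}
	
	func main() {
		_ = runTimed("sleep", "2")
	}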
	I0229 02:30:35.827402  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827906  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:35.827932  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:35.827872  370716 retry.go:31] will retry after 902.653821ms: waiting for machine to come up
	I0229 02:30:36.732470  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732925  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:36.732957  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:36.732863  370716 retry.go:31] will retry after 1.322376383s: waiting for machine to come up
	I0229 02:30:38.057306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:38.057874  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:38.057790  370716 retry.go:31] will retry after 1.16249498s: waiting for machine to come up
	I0229 02:30:39.221714  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222197  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:39.222236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:39.222156  370716 retry.go:31] will retry after 1.912383064s: waiting for machine to come up
	I0229 02:30:42.350149  369591 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.998331984s)
	I0229 02:30:42.350198  369591 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0229 02:30:42.350214  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.99848453s)
	I0229 02:30:42.350266  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0229 02:30:42.350305  369591 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:42.350357  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0229 02:30:41.135736  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136113  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:41.136144  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:41.136058  370716 retry.go:31] will retry after 2.823296742s: waiting for machine to come up
	I0229 02:30:43.960885  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961677  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:43.961703  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:43.961582  370716 retry.go:31] will retry after 3.266272258s: waiting for machine to come up
	I0229 02:30:44.528869  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.178478896s)
	I0229 02:30:44.528915  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0229 02:30:44.528947  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:44.529014  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0229 02:30:46.991074  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.462030604s)
	I0229 02:30:46.991103  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0229 02:30:46.991129  369591 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:46.991195  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0229 02:30:47.229005  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229478  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | unable to find current IP address of domain default-k8s-diff-port-071485 in network mk-default-k8s-diff-port-071485
	I0229 02:30:47.229511  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | I0229 02:30:47.229417  370716 retry.go:31] will retry after 3.429712893s: waiting for machine to come up
	I0229 02:30:51.887858  370051 start.go:369] acquired machines lock for "old-k8s-version-275488" in 4m15.644916266s
	I0229 02:30:51.887935  370051 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:30:51.887944  370051 fix.go:54] fixHost starting: 
	I0229 02:30:51.888374  370051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:30:51.888428  370051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:30:51.905851  370051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36903
	I0229 02:30:51.906292  370051 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:30:51.906778  370051 main.go:141] libmachine: Using API Version  1
	I0229 02:30:51.906806  370051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:30:51.907250  370051 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:30:51.907459  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:30:51.907631  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetState
	I0229 02:30:51.909061  370051 fix.go:102] recreateIfNeeded on old-k8s-version-275488: state=Stopped err=<nil>
	I0229 02:30:51.909093  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	W0229 02:30:51.909251  370051 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:30:51.911318  370051 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-275488" ...
	I0229 02:30:50.662939  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663341  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Found IP for machine: 192.168.61.233
	I0229 02:30:50.663366  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserving static IP address...
	I0229 02:30:50.663404  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has current primary IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.663745  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.663781  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Reserved static IP address: 192.168.61.233
	I0229 02:30:50.663804  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | skip adding static IP to network mk-default-k8s-diff-port-071485 - found existing host DHCP lease matching {name: "default-k8s-diff-port-071485", mac: "52:54:00:81:f9:08", ip: "192.168.61.233"}
	I0229 02:30:50.663819  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Waiting for SSH to be available...
	I0229 02:30:50.663830  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Getting to WaitForSSH function...
	I0229 02:30:50.665924  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666270  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.666306  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.666411  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH client type: external
	I0229 02:30:50.666435  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa (-rw-------)
	I0229 02:30:50.666464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:30:50.666477  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | About to run SSH command:
	I0229 02:30:50.666489  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | exit 0
	I0229 02:30:50.794598  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | SSH cmd err, output: <nil>: 
	I0229 02:30:50.795011  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetConfigRaw
	I0229 02:30:50.795753  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:50.798443  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.798796  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.798822  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.799151  369869 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/config.json ...
	I0229 02:30:50.799410  369869 machine.go:88] provisioning docker machine ...
	I0229 02:30:50.799440  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:50.799684  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.799937  369869 buildroot.go:166] provisioning hostname "default-k8s-diff-port-071485"
	I0229 02:30:50.799963  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:50.800129  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.802457  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802786  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.802813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.802923  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.803087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803281  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.803393  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.803527  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.803744  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.803757  369869 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-071485 && echo "default-k8s-diff-port-071485" | sudo tee /etc/hostname
	I0229 02:30:50.930812  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-071485
	
	I0229 02:30:50.930849  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:50.933650  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934017  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:50.934057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:50.934217  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:50.934458  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934651  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:50.934813  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:50.934964  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:50.935141  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:50.935159  369869 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-071485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-071485/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-071485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:30:51.057233  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:30:51.057266  369869 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:30:51.057307  369869 buildroot.go:174] setting up certificates
	I0229 02:30:51.057321  369869 provision.go:83] configureAuth start
	I0229 02:30:51.057335  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetMachineName
	I0229 02:30:51.057615  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.060233  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060563  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.060595  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.060707  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.062583  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.062889  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.062938  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.063065  369869 provision.go:138] copyHostCerts
	I0229 02:30:51.063121  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:30:51.063140  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:30:51.063193  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:30:51.063290  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:30:51.063304  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:30:51.063332  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:30:51.063396  369869 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:30:51.063403  369869 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:30:51.063420  369869 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:30:51.063482  369869 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-071485 san=[192.168.61.233 192.168.61.233 localhost 127.0.0.1 minikube default-k8s-diff-port-071485]
	I0229 02:30:51.180356  369869 provision.go:172] copyRemoteCerts
	I0229 02:30:51.180417  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:30:51.180446  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.182981  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183262  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.183295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.183465  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.183656  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.183814  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.183958  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.270548  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:30:51.297136  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 02:30:51.323133  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:30:51.349241  369869 provision.go:86] duration metric: configureAuth took 291.905825ms
	I0229 02:30:51.349269  369869 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:30:51.349453  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:30:51.349529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.352119  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352473  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.352503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.352658  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.352839  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353009  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.353122  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.353304  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.353480  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.353495  369869 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:30:51.639987  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:30:51.640022  369869 machine.go:91] provisioned docker machine in 840.591751ms
	I0229 02:30:51.640041  369869 start.go:300] post-start starting for "default-k8s-diff-port-071485" (driver="kvm2")
	I0229 02:30:51.640057  369869 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:30:51.640087  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.640450  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:30:51.640486  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.643118  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643427  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.643464  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.643661  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.643871  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.644025  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.644164  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.730150  369869 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:30:51.735109  369869 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:30:51.735135  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:30:51.735207  369869 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:30:51.735298  369869 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:30:51.735416  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:30:51.745416  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:51.771727  369869 start.go:303] post-start completed in 131.66845ms
	I0229 02:30:51.771756  369869 fix.go:56] fixHost completed within 20.144195498s
	I0229 02:30:51.771782  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.774300  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774582  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.774610  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.774744  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.774972  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.775295  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.775481  369869 main.go:141] libmachine: Using SSH client type: native
	I0229 02:30:51.775648  369869 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0229 02:30:51.775659  369869 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:30:51.887656  369869 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173851.865903243
	
	I0229 02:30:51.887683  369869 fix.go:206] guest clock: 1709173851.865903243
	I0229 02:30:51.887691  369869 fix.go:219] Guest: 2024-02-29 02:30:51.865903243 +0000 UTC Remote: 2024-02-29 02:30:51.771760886 +0000 UTC m=+266.432013426 (delta=94.142357ms)
	I0229 02:30:51.887738  369869 fix.go:190] guest clock delta is within tolerance: 94.142357ms
	I0229 02:30:51.887744  369869 start.go:83] releasing machines lock for "default-k8s-diff-port-071485", held for 20.260217484s
	I0229 02:30:51.887771  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.888047  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:51.890930  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891264  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.891294  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.891491  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892002  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892209  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:30:51.892299  369869 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:30:51.892370  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.892472  369869 ssh_runner.go:195] Run: cat /version.json
	I0229 02:30:51.892503  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:30:51.895178  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895415  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895591  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895626  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895769  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:51.895800  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:51.895820  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.895966  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:30:51.896055  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896141  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:30:51.896212  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:30:51.896367  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.896447  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:30:51.976085  369869 ssh_runner.go:195] Run: systemctl --version
	I0229 02:30:52.001946  369869 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:30:52.156753  369869 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:30:52.164196  369869 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:30:52.164302  369869 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:30:52.189176  369869 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:30:52.189201  369869 start.go:475] detecting cgroup driver to use...
	I0229 02:30:52.189281  369869 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:30:52.207647  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:30:52.223752  369869 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:30:52.223842  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:30:52.246026  369869 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:30:52.262180  369869 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:30:52.409077  369869 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:30:52.583777  369869 docker.go:233] disabling docker service ...
	I0229 02:30:52.583850  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:30:52.601434  369869 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:30:52.617382  369869 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:30:52.757258  369869 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:30:52.898036  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:30:52.915787  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:30:52.939344  369869 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:30:52.939417  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.951659  369869 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:30:52.951722  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.963072  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.974800  369869 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:30:52.986490  369869 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:30:52.998630  369869 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:30:53.009783  369869 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:30:53.009862  369869 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:30:53.026356  369869 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 02:30:53.038720  369869 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:30:53.171220  369869 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:30:53.326032  369869 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:30:53.326102  369869 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:30:53.332369  369869 start.go:543] Will wait 60s for crictl version
	I0229 02:30:53.332431  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:30:53.336784  369869 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:30:53.378780  369869 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:30:53.378902  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.411158  369869 ssh_runner.go:195] Run: crio --version
	I0229 02:30:53.447038  369869 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:30:49.053324  369591 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.062103665s)
	I0229 02:30:49.053353  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0229 02:30:49.053378  369591 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.053426  369591 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0229 02:30:49.910791  369591 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0229 02:30:49.910854  369591 cache_images.go:123] Successfully loaded all cached images
	I0229 02:30:49.910862  369591 cache_images.go:92] LoadImages completed in 16.696734078s
	I0229 02:30:49.910994  369591 ssh_runner.go:195] Run: crio config
	I0229 02:30:49.961413  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:30:49.961435  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:49.961456  369591 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:49.961509  369591 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.114 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-247751 NodeName:no-preload-247751 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:49.961701  369591 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-247751"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:30:49.961801  369591 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-247751 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:30:49.961866  369591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 02:30:49.973105  369591 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:49.973170  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:49.983178  369591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0229 02:30:50.001511  369591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 02:30:50.019574  369591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0229 02:30:50.037993  369591 ssh_runner.go:195] Run: grep 192.168.72.114	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:50.042075  369591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:50.054761  369591 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751 for IP: 192.168.72.114
	I0229 02:30:50.054796  369591 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:50.054976  369591 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:50.055031  369591 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:50.055146  369591 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/client.key
	I0229 02:30:50.055243  369591 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key.9adeb8c5
	I0229 02:30:50.055310  369591 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key
	I0229 02:30:50.055440  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:50.055481  369591 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:50.055502  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:50.055542  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:50.055577  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:50.055658  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:50.055728  369591 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:50.056454  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:50.083764  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:50.110733  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:50.139180  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/no-preload-247751/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:50.167000  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:50.194044  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:50.220671  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:50.247561  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:50.274577  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:50.300997  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:50.327718  369591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:50.355463  369591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:50.374921  369591 ssh_runner.go:195] Run: openssl version
	I0229 02:30:50.381614  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:50.393546  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398532  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.398594  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:50.404719  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:30:50.416507  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:50.428072  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433031  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.433106  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:50.439174  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:50.450778  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:50.462238  369591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467219  369591 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.467269  369591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:50.473395  369591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:50.484817  369591 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:50.489643  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:50.496274  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:50.502579  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:50.508665  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:50.514827  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:50.520958  369591 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 02:30:50.527032  369591 kubeadm.go:404] StartCluster: {Name:no-preload-247751 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-247751 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:50.527147  369591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:50.527194  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:50.565834  369591 cri.go:89] found id: ""
	I0229 02:30:50.565931  369591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:50.577305  369591 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:50.577354  369591 kubeadm.go:636] restartCluster start
	I0229 02:30:50.577408  369591 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:50.587881  369591 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:50.588896  369591 kubeconfig.go:92] found "no-preload-247751" server: "https://192.168.72.114:8443"
	I0229 02:30:50.591223  369591 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:50.601374  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:50.601434  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:50.613730  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.102422  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.102539  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.116483  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:51.601564  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:51.601657  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:51.615481  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.102039  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.102123  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.121300  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:52.601999  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:52.602093  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:52.618701  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.102291  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.102403  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.117898  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:53.602410  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:53.602496  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:53.618760  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
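
The repeated "Checking apiserver status" entries above are a fixed-cadence poll: roughly every 500ms the runner asks pgrep for a kube-apiserver PID, and exit status 1 simply means "not up yet". A sketch of that loop under the same assumptions (the 10-second timeout here is arbitrary; the real deadline in the log is much longer):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls the logged command until it prints a PID or the
// deadline passes. pgrep exits 1 when no process matches the pattern.
func waitForAPIServerPID(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("stopped: unable to get apiserver pid within %s", timeout)
}

func main() {
	if err := waitForAPIServerPID(10 * time.Second); err != nil {
		fmt.Println(err)
	}
}
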
	I0229 02:30:53.448437  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetIP
	I0229 02:30:53.451649  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.451998  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:30:53.452052  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:30:53.452302  369869 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 02:30:53.458709  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:30:53.477744  369869 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:30:53.477831  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:53.527511  369869 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:30:53.527593  369869 ssh_runner.go:195] Run: which lz4
	I0229 02:30:53.532370  369869 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:30:53.537149  369869 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:30:53.537179  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:30:51.912520  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .Start
	I0229 02:30:51.912688  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring networks are active...
	I0229 02:30:51.913511  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network default is active
	I0229 02:30:51.913929  370051 main.go:141] libmachine: (old-k8s-version-275488) Ensuring network mk-old-k8s-version-275488 is active
	I0229 02:30:51.914378  370051 main.go:141] libmachine: (old-k8s-version-275488) Getting domain xml...
	I0229 02:30:51.915191  370051 main.go:141] libmachine: (old-k8s-version-275488) Creating domain...
	I0229 02:30:53.179261  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting to get IP...
	I0229 02:30:53.180359  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.180800  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.180922  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.180789  370858 retry.go:31] will retry after 282.360524ms: waiting for machine to come up
	I0229 02:30:53.465135  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.465708  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.465742  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.465651  370858 retry.go:31] will retry after 341.876004ms: waiting for machine to come up
	I0229 02:30:53.809322  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:53.809734  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:53.809876  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:53.809797  370858 retry.go:31] will retry after 356.208548ms: waiting for machine to come up
	I0229 02:30:54.167329  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.167824  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.167852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.167760  370858 retry.go:31] will retry after 395.76503ms: waiting for machine to come up
	I0229 02:30:54.565496  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:54.565976  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:54.566004  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:54.565933  370858 retry.go:31] will retry after 617.898012ms: waiting for machine to come up
	I0229 02:30:55.185679  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:55.186193  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:55.186237  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:55.186144  370858 retry.go:31] will retry after 911.947678ms: waiting for machine to come up
	I0229 02:30:56.099334  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:56.099788  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:56.099815  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:56.099726  370858 retry.go:31] will retry after 1.132066509s: waiting for machine to come up
	I0229 02:30:54.102304  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.102485  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.123193  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:54.601763  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:54.601890  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:54.621846  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.102417  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.102503  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.129010  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.601478  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:55.601532  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:55.620169  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.101701  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.101776  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.121369  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:56.601447  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:56.601550  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:56.617079  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.101509  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.101648  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.121691  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.601658  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:57.601754  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:57.620357  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.101829  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.101921  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.115818  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:58.602403  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:58.602509  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:58.621857  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:55.599398  369869 crio.go:444] Took 2.067052 seconds to copy over tarball
	I0229 02:30:55.599501  369869 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:30:58.543850  369869 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.944309258s)
	I0229 02:30:58.543884  369869 crio.go:451] Took 2.944447 seconds to extract the tarball
	I0229 02:30:58.543896  369869 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:30:58.592492  369869 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:30:58.751479  369869 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:30:58.751509  369869 cache_images.go:84] Images are preloaded, skipping loading
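
Taken together, the 369869 lines from 02:30:53.477 to 02:30:58.751 trace the whole preload pipeline: stat the tarball on the guest, scp it over when the stat fails, extract it into /var with lz4, then re-run crictl images to confirm the images landed. A sketch chaining the same guest-side commands (the orchestration around them is illustrative, and the scp step is elided since it runs from the host side):

package main

import (
	"fmt"
	"os/exec"
)

func sh(script string) error {
	return exec.Command("/bin/bash", "-c", script).Run()
}

// ensurePreloadedImages replays the guest-side commands from the log.
func ensurePreloadedImages() error {
	// Existence check; exit status 1 means the tarball must be copied over first.
	if err := sh(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		return fmt.Errorf("preload tarball missing, copy it over first: %w", err)
	}
	// Extract the preloaded images into /var, preserving xattrs for file capabilities.
	if err := sh(`sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`); err != nil {
		return fmt.Errorf("extracting preload tarball: %w", err)
	}
	// Verify cri-o now reports the images.
	return sh(`sudo crictl images --output json`)
}

func main() {
	if err := ensurePreloadedImages(); err != nil {
		fmt.Println(err)
	}
}
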
	I0229 02:30:58.751576  369869 ssh_runner.go:195] Run: crio config
	I0229 02:30:58.813487  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:30:58.813515  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:30:58.813540  369869 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:30:58.813566  369869 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.233 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-071485 NodeName:default-k8s-diff-port-071485 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:30:58.813785  369869 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.233
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-071485"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:30:58.813898  369869 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-071485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
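
The kubeadm manifest above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---, and the kubelet drop-in then pins the matching --config and --container-runtime-endpoint flags. A small generic sketch of splitting such a multi-document manifest for inspection (plain string handling, not minikube code):

package main

import (
	"fmt"
	"strings"
)

// splitYAMLDocs breaks a multi-document manifest on the standard "---" separator.
func splitYAMLDocs(manifest string) []string {
	var docs []string
	for _, d := range strings.Split(manifest, "\n---\n") {
		if s := strings.TrimSpace(d); s != "" {
			docs = append(docs, s)
		}
	}
	return docs
}

func main() {
	manifest := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	for i, d := range splitYAMLDocs(manifest) {
		fmt.Printf("document %d:\n%s\n\n", i+1, d)
	}
}
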
	I0229 02:30:58.813971  369869 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:30:58.826199  369869 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:30:58.826324  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:30:58.837384  369869 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0229 02:30:58.856023  369869 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:30:58.876432  369869 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0229 02:30:58.900684  369869 ssh_runner.go:195] Run: grep 192.168.61.233	control-plane.minikube.internal$ /etc/hosts
	I0229 02:30:58.905249  369869 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
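
Both /etc/hosts rewrites in this run use the same idempotent pattern: drop any stale line ending in the tab-separated hostname, append the fresh mapping, and sudo-copy a temp file back into place. The same logic in Go, printing the result instead of copying it back so the sketch stays side-effect free (hostname and IP taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line for name and appends ip<TAB>name,
// mirroring the logged bash one-liner's grep -v + echo pipeline.
func upsertHostsEntry(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	content, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(upsertHostsEntry(string(content), "192.168.61.233", "control-plane.minikube.internal"))
}
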
	I0229 02:30:58.920007  369869 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485 for IP: 192.168.61.233
	I0229 02:30:58.920046  369869 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:30:58.920249  369869 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:30:58.920319  369869 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:30:58.920432  369869 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/client.key
	I0229 02:30:58.995037  369869 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key.b3fc8ab0
	I0229 02:30:58.995173  369869 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key
	I0229 02:30:58.995377  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:30:58.995430  369869 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:30:58.995451  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:30:58.995503  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:30:58.995543  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:30:58.995590  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:30:58.995653  369869 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:30:58.996607  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:30:59.026487  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:30:59.054725  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:30:59.082553  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/default-k8s-diff-port-071485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:30:59.110374  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:30:59.141972  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:30:59.170097  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:30:59.201206  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:30:59.232790  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:30:59.263940  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:30:59.292401  369869 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:30:59.321920  369869 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:30:59.343921  369869 ssh_runner.go:195] Run: openssl version
	I0229 02:30:59.351308  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:30:59.364059  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369212  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.369302  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:30:59.375683  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:30:59.389046  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:30:59.404101  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409433  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.409491  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:30:59.416126  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:30:59.429674  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:30:59.443405  369869 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448931  369869 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.448991  369869 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:30:59.455800  369869 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:30:59.469013  369869 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:30:59.474745  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:30:59.481689  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:30:59.488868  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:30:59.496380  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:30:59.503593  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:30:59.510485  369869 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
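
The openssl runs above all use -checkend 86400, which succeeds only while the certificate remains valid 86400 seconds (24 hours) from now; a nonzero exit on any of them would force certificate regeneration. The equivalent test in Go via crypto/x509 rather than shelling out (the file path is one of those checked in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid d from now,
// the same predicate as `openssl x509 -checkend` with d given in seconds.
func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
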
	I0229 02:30:59.517770  369869 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-071485 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-071485 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:30:59.517894  369869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:30:59.517941  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:30:59.564631  369869 cri.go:89] found id: ""
	I0229 02:30:59.564718  369869 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:30:59.578812  369869 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:30:59.578881  369869 kubeadm.go:636] restartCluster start
	I0229 02:30:59.578954  369869 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:30:59.592900  369869 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.593909  369869 kubeconfig.go:92] found "default-k8s-diff-port-071485" server: "https://192.168.61.233:8444"
	I0229 02:30:59.596083  369869 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:30:59.609384  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.609466  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.625617  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.110139  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.110282  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.127301  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:57.233610  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:57.234113  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:57.234145  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:57.234063  370858 retry.go:31] will retry after 1.238348525s: waiting for machine to come up
	I0229 02:30:58.474146  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:58.474696  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:58.474733  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:58.474642  370858 retry.go:31] will retry after 1.373712981s: waiting for machine to come up
	I0229 02:30:59.850075  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:30:59.850504  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:30:59.850526  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:30:59.850460  370858 retry.go:31] will retry after 2.156069813s: waiting for machine to come up
	I0229 02:30:59.101727  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.101812  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.120465  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:30:59.602060  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:30:59.602155  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:30:59.620588  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.102108  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.102203  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.120822  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.602443  369591 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.602545  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.616796  369591 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:00.616835  369591 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:31:00.616858  369591 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:00.616873  369591 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:00.616940  369591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:00.661747  369591 cri.go:89] found id: ""
	I0229 02:31:00.661869  369591 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:00.684098  369591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:00.696989  369591 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:00.697059  369591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708553  369591 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:00.708583  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:00.827929  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.578572  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.818119  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:01.892891  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
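
Because this is a restart rather than a first boot, the runner replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file instead of running a full kubeadm init. A sketch of that replay loop, reusing the exact command line from the log (the loop itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		// Each iteration mirrors one logged /bin/bash -c "sudo env PATH=... kubeadm init phase ..." call.
		script := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if err := exec.Command("/bin/bash", "-c", script).Run(); err != nil {
			fmt.Printf("phase %q failed: %v\n", phase, err)
			return
		}
	}
}
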
	I0229 02:31:01.964926  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:01.965037  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.466098  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:02.965290  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.465897  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:03.483060  369591 api_server.go:72] duration metric: took 1.518135432s to wait for apiserver process to appear ...
	I0229 02:31:03.483103  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:03.483127  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:00.610179  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:00.610299  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:00.630460  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.109543  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.109680  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.129578  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:01.610203  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:01.610301  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:01.630078  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.109835  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.109945  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.127400  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.610160  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:02.610269  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:02.630581  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.109702  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.109836  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.129754  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:03.610303  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:03.610389  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:03.629702  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.110325  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.110459  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.128740  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:04.610305  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:04.610403  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:04.624716  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:05.110349  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.110457  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.130070  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:02.007911  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:02.008381  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:02.008409  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:02.008330  370858 retry.go:31] will retry after 1.864134048s: waiting for machine to come up
	I0229 02:31:03.873997  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:03.874606  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:03.874653  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:03.874547  370858 retry.go:31] will retry after 2.45659808s: waiting for machine to come up
	I0229 02:31:06.111554  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.111581  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.111596  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.191055  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:06.191090  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:06.483401  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.489220  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.489254  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:06.983921  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:06.988354  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:06.988430  369591 api_server.go:103] status: https://192.168.72.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:07.483305  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:31:07.489830  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:31:07.497146  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:31:07.497187  369591 api_server.go:131] duration metric: took 4.014075718s to wait for apiserver health ...
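
The healthz trace above shows the normal startup progression: 403 while anonymous access to /healthz is still forbidden, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200 once bootstrap completes. A minimal poller in the same spirit (TLS verification is skipped here only because this sketch carries no CA bundle; the real client authenticates properly):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Poll until /healthz returns 200; 403 and 500 are expected transients.
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://192.168.72.114:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
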
	I0229 02:31:07.497201  369591 cni.go:84] Creating CNI manager for ""
	I0229 02:31:07.497210  369591 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:07.498785  369591 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:07.500032  369591 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:07.530625  369591 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:31:07.594249  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:07.604940  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:07.604973  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:07.604980  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:07.604989  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:07.604995  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:07.605003  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:07.605015  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:07.605022  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:07.605032  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:07.605052  369591 system_pods.go:74] duration metric: took 10.776743ms to wait for pod list to return data ...
	I0229 02:31:07.605061  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:07.608034  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:07.608059  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:07.608073  369591 node_conditions.go:105] duration metric: took 3.004467ms to run NodePressure ...
	I0229 02:31:07.608096  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:07.975871  369591 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980949  369591 kubeadm.go:787] kubelet initialised
	I0229 02:31:07.980970  369591 kubeadm.go:788] duration metric: took 5.071971ms waiting for restarted kubelet to initialise ...
	I0229 02:31:07.980979  369591 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:07.986764  369591 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.992673  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992698  369591 pod_ready.go:81] duration metric: took 5.911106ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.992707  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "coredns-76f75df574-2z5w8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.992717  369591 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:07.997300  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997322  369591 pod_ready.go:81] duration metric: took 4.594827ms waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:07.997330  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "etcd-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:07.997335  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.004032  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004052  369591 pod_ready.go:81] duration metric: took 6.71117ms waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.004060  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-apiserver-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.004066  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.009947  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.009985  369591 pod_ready.go:81] duration metric: took 5.909502ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.010001  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.010009  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.398938  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398965  369591 pod_ready.go:81] duration metric: took 388.944943ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.398975  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-proxy-cdc4l" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.398982  369591 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:08.797706  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797733  369591 pod_ready.go:81] duration metric: took 398.745142ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:08.797744  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "kube-scheduler-no-preload-247751" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:08.797751  369591 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:09.198467  369591 pod_ready.go:97] node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198496  369591 pod_ready.go:81] duration metric: took 400.737315ms waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:09.198506  369591 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-247751" hosting pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:09.198511  369591 pod_ready.go:38] duration metric: took 1.217523271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
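The wait loop above polls each system-critical pod and, as the pod_ready.go:97 lines show, skips a pod early when its hosting node is not yet "Ready". A minimal client-go sketch of the per-pod polling shape, assuming a kubeconfig at the default home path; the node-not-Ready short-circuit is omitted, and the pod name is taken from the log for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one system-critical pod for up to 4 minutes, mirroring the
	// "waiting up to 4m0s" budget in the log.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-247751", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(400 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}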
	I0229 02:31:09.198530  369591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:31:09.211194  369591 ops.go:34] apiserver oom_adj: -16
	I0229 02:31:09.211222  369591 kubeadm.go:640] restartCluster took 18.633858862s
	I0229 02:31:09.211232  369591 kubeadm.go:406] StartCluster complete in 18.684207766s
	I0229 02:31:09.211263  369591 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.211346  369591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:09.212899  369591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:09.213213  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:31:09.213318  369591 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:31:09.213406  369591 addons.go:69] Setting storage-provisioner=true in profile "no-preload-247751"
	I0229 02:31:09.213426  369591 addons.go:69] Setting default-storageclass=true in profile "no-preload-247751"
	I0229 02:31:09.213446  369591 addons.go:69] Setting metrics-server=true in profile "no-preload-247751"
	I0229 02:31:09.213463  369591 addons.go:234] Setting addon metrics-server=true in "no-preload-247751"
	I0229 02:31:09.213465  369591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-247751"
	I0229 02:31:09.213463  369591 config.go:182] Loaded profile config "no-preload-247751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	W0229 02:31:09.213472  369591 addons.go:243] addon metrics-server should already be in state true
	I0229 02:31:09.213435  369591 addons.go:234] Setting addon storage-provisioner=true in "no-preload-247751"
	W0229 02:31:09.213515  369591 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:31:09.213519  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213541  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.213915  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213924  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213944  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.213943  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.213978  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.218976  369591 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-247751" context rescaled to 1 replicas
	I0229 02:31:09.219015  369591 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.114 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:31:09.220657  369591 out.go:177] * Verifying Kubernetes components...
	I0229 02:31:09.221954  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:31:09.230064  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0229 02:31:09.230528  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.231030  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.231053  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.231526  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.231762  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.233032  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0229 02:31:09.233487  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.233929  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42835
	I0229 02:31:09.234003  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234028  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.234293  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.234406  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.234784  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.234811  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.235009  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235068  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.235163  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.235631  369591 addons.go:234] Setting addon default-storageclass=true in "no-preload-247751"
	W0229 02:31:09.235651  369591 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:31:09.235679  369591 host.go:66] Checking if "no-preload-247751" exists ...
	I0229 02:31:09.235738  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.235772  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.236123  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.236157  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.250756  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I0229 02:31:09.251190  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.251830  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.251855  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.252228  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.252403  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.254210  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.256240  369591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:09.257522  369591 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.257537  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:31:09.257552  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.255418  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0229 02:31:09.255485  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0229 02:31:09.258003  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258129  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.258432  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258457  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258664  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.258676  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.258822  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.258983  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.259278  369591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:09.259313  369591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:09.259533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.261295  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.261320  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.262706  369591 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:31:05.610163  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:05.610319  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:05.627782  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.110424  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.110521  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.129628  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:06.610193  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:06.610330  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:06.624176  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.110249  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.110354  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.129955  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:07.609462  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:07.609536  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:07.623687  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.110263  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.110407  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.126900  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:08.610447  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:08.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:08.625182  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.109675  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.109759  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.124637  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.610399  369869 api_server.go:166] Checking apiserver status ...
	I0229 02:31:09.610520  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:09.630681  369869 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:09.630715  369869 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
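The preceding run of "Checking apiserver status" lines is a retry loop: pgrep is invoked roughly every 500ms until it returns a pid or the context deadline expires, at which point the cluster is flagged for reconfiguration. A sketch of that shape using os/exec, run locally rather than over the log's SSH runner, with a fixed deadline standing in for the context:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll for the kube-apiserver pid the way the log does; pgrep exits
	// with status 1 when no process matches, which surfaces as err here.
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("stopped: unable to get apiserver pid")
}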
	I0229 02:31:09.630757  369869 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:09.630777  369869 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:09.630844  369869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:09.683876  369869 cri.go:89] found id: ""
	I0229 02:31:09.683963  369869 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:09.706059  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:09.719868  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:09.719939  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734591  369869 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:09.734622  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:09.862689  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:09.263808  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:31:09.263830  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:31:09.263849  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.261760  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.261947  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.263890  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.264339  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.264522  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.264704  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.266885  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267339  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.267358  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.267533  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.267649  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.267782  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.267862  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.302813  369591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38007
	I0229 02:31:09.303329  369591 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:09.303878  369591 main.go:141] libmachine: Using API Version  1
	I0229 02:31:09.303909  369591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:09.304305  369591 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:09.304509  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetState
	I0229 02:31:09.306147  369591 main.go:141] libmachine: (no-preload-247751) Calling .DriverName
	I0229 02:31:09.306434  369591 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.306454  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:31:09.306472  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHHostname
	I0229 02:31:09.309029  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309345  369591 main.go:141] libmachine: (no-preload-247751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fa:c1:ec", ip: ""} in network mk-no-preload-247751: {Iface:virbr4 ExpiryTime:2024-02-29 03:30:24 +0000 UTC Type:0 Mac:52:54:00:fa:c1:ec Iaid: IPaddr:192.168.72.114 Prefix:24 Hostname:no-preload-247751 Clientid:01:52:54:00:fa:c1:ec}
	I0229 02:31:09.309382  369591 main.go:141] libmachine: (no-preload-247751) DBG | domain no-preload-247751 has defined IP address 192.168.72.114 and MAC address 52:54:00:fa:c1:ec in network mk-no-preload-247751
	I0229 02:31:09.309670  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHPort
	I0229 02:31:09.309872  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHKeyPath
	I0229 02:31:09.310048  369591 main.go:141] libmachine: (no-preload-247751) Calling .GetSSHUsername
	I0229 02:31:09.310193  369591 sshutil.go:53] new ssh client: &{IP:192.168.72.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/no-preload-247751/id_rsa Username:docker}
	I0229 02:31:09.402579  369591 node_ready.go:35] waiting up to 6m0s for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:09.402756  369591 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:31:09.420259  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:31:09.426629  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:31:09.426655  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:31:09.446028  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:31:09.457219  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:31:09.457244  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:31:09.504028  369591 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:09.504054  369591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:31:09.554137  369591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:31:10.485560  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.039492326s)
	I0229 02:31:10.485633  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485646  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.485928  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.065634917s)
	I0229 02:31:10.485970  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.485986  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486053  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.486072  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486092  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486104  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486112  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.486254  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.486287  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.486304  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.486320  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.487538  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487556  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487566  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.487543  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.487582  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.487579  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.494355  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.494374  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.494614  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.494635  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.494633  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.559201  369591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.005004802s)
	I0229 02:31:10.559258  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559276  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559592  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559614  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559625  369591 main.go:141] libmachine: Making call to close driver server
	I0229 02:31:10.559633  369591 main.go:141] libmachine: (no-preload-247751) Calling .Close
	I0229 02:31:10.559899  369591 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:31:10.559915  369591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:31:10.559926  369591 addons.go:470] Verifying addon metrics-server=true in "no-preload-247751"
	I0229 02:31:10.561833  369591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:31:06.333259  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:06.333776  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:06.333811  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:06.333733  370858 retry.go:31] will retry after 3.223893936s: waiting for machine to come up
	I0229 02:31:09.559349  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:09.559937  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | unable to find current IP address of domain old-k8s-version-275488 in network mk-old-k8s-version-275488
	I0229 02:31:09.559968  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | I0229 02:31:09.559891  370858 retry.go:31] will retry after 5.278822831s: waiting for machine to come up
	I0229 02:31:10.560171  369591 main.go:141] libmachine: (no-preload-247751) DBG | Closing plugin on server side
	I0229 02:31:10.563240  369591 addons.go:505] enable addons completed in 1.349905679s: enabled=[storage-provisioner default-storageclass metrics-server]
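Enabling an addon, as traced above, amounts to scp-ing the rendered manifest into /etc/kubernetes/addons on the node and applying it with the cluster's own kubectl under the node kubeconfig. A sketch of that apply step, using the binary and kubeconfig paths from the log; in minikube the command runs on the node via ssh_runner, here it is shown as a plain local exec:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the log's `sudo KUBECONFIG=... kubectl apply -f ...` call;
	// sudo accepts the VAR=value prefix before the command.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}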
	I0229 02:31:11.408006  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:10.805438  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.016546  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:11.132323  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
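Rather than a full `kubeadm init`, the reconfigure path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the same generated config. A sketch of that sequencing, assuming the v1.28.4 binary path shown in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		// Each iteration mirrors one `kubeadm init phase <p> --config ...`
		// invocation from the log, stopping at the first failure.
		args := []string{"/var/lib/minikube/binaries/v1.28.4/kubeadm", "init", "phase"}
		args = append(args, strings.Fields(p)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all phases completed")
}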
	I0229 02:31:11.212201  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:11.212309  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:11.713366  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.212866  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.713327  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:12.732027  369869 api_server.go:72] duration metric: took 1.519826457s to wait for apiserver process to appear ...
	I0229 02:31:12.732056  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:12.732078  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.109299  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.109349  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.109368  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.166169  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:15.166209  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:15.232359  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.267052  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.267099  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
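The healthz probe above tolerates intermediate failures: a 403 while the apiserver still rejects the anonymous user (before RBAC bootstrap completes) and a 500 while post-start hooks are settling, retrying until it sees a 200. A sketch of that probe with net/http, assuming a plain unauthenticated GET with TLS verification skipped, since the probe targets the node IP directly, consistent with the 403 responses in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The probe talks straight to the node IP, so the serving
			// certificate cannot be verified against a public CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.233:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// 403 before RBAC bootstrap and 500 during post-start hooks
			// both count as "not ready yet", per the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}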
	I0229 02:31:16.096073  369508 start.go:369] acquired machines lock for "embed-certs-915633" in 58.856797615s
	I0229 02:31:16.096132  369508 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:31:16.096144  369508 fix.go:54] fixHost starting: 
	I0229 02:31:16.096651  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:31:16.096692  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:31:16.115912  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
	I0229 02:31:16.116419  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:31:16.116967  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:31:16.116999  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:31:16.117362  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:31:16.117562  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:16.117742  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:31:16.119589  369508 fix.go:102] recreateIfNeeded on embed-certs-915633: state=Stopped err=<nil>
	I0229 02:31:16.119614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	W0229 02:31:16.119809  369508 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:31:16.121566  369508 out.go:177] * Restarting existing kvm2 VM for "embed-certs-915633" ...
	I0229 02:31:14.842498  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843049  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has current primary IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.843083  370051 main.go:141] libmachine: (old-k8s-version-275488) Found IP for machine: 192.168.39.160
	I0229 02:31:14.843112  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserving static IP address...
	I0229 02:31:14.843485  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.843510  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | skip adding static IP to network mk-old-k8s-version-275488 - found existing host DHCP lease matching {name: "old-k8s-version-275488", mac: "52:54:00:6c:fc:74", ip: "192.168.39.160"}
	I0229 02:31:14.843525  370051 main.go:141] libmachine: (old-k8s-version-275488) Reserved static IP address: 192.168.39.160
	I0229 02:31:14.843535  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Getting to WaitForSSH function...
	I0229 02:31:14.843553  370051 main.go:141] libmachine: (old-k8s-version-275488) Waiting for SSH to be available...
	I0229 02:31:14.845739  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846087  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.846120  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.846289  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH client type: external
	I0229 02:31:14.846319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa (-rw-------)
	I0229 02:31:14.846355  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:14.846372  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | About to run SSH command:
	I0229 02:31:14.846390  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | exit 0
	I0229 02:31:14.979384  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | SSH cmd err, output: <nil>: 
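"Waiting for SSH" here is simply retrying an external `ssh ... exit 0` against the machine until it exits cleanly. A sketch of that retry using a subset of the client flags from the log (the key path and address are copied from the log output above):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", "/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa",
		"docker@192.168.39.160", "exit", "0",
	}
	// Retry until the trivial remote command succeeds, i.e. sshd is up
	// and key auth works.
	for attempt := 1; attempt <= 30; attempt++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}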
	I0229 02:31:14.979896  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetConfigRaw
	I0229 02:31:14.980716  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:14.983852  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984278  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.984319  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.984639  370051 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/config.json ...
	I0229 02:31:14.984865  370051 machine.go:88] provisioning docker machine ...
	I0229 02:31:14.984890  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:14.985140  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985324  370051 buildroot.go:166] provisioning hostname "old-k8s-version-275488"
	I0229 02:31:14.985347  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:14.985494  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:14.988036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988438  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:14.988464  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:14.988633  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:14.988829  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989003  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:14.989174  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:14.989361  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:14.989604  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:14.989621  370051 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-275488 && echo "old-k8s-version-275488" | sudo tee /etc/hostname
	I0229 02:31:15.125564  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-275488
	
	I0229 02:31:15.125605  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.128963  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129570  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.129652  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.129735  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.129996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130185  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.130380  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.130616  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.130872  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.130900  370051 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-275488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-275488/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-275488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:15.272298  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:31:15.272337  370051 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:15.272368  370051 buildroot.go:174] setting up certificates
	I0229 02:31:15.272385  370051 provision.go:83] configureAuth start
	I0229 02:31:15.272402  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetMachineName
	I0229 02:31:15.272772  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:15.276382  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.276838  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.276869  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.277051  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.279927  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280298  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.280326  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.280505  370051 provision.go:138] copyHostCerts
	I0229 02:31:15.280555  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:15.280566  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:15.280619  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:15.280749  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:15.280764  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:15.280789  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:15.280857  370051 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:15.280871  370051 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:15.280891  370051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:15.280954  370051 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-275488 san=[192.168.39.160 192.168.39.160 localhost 127.0.0.1 minikube old-k8s-version-275488]
	I0229 02:31:15.360428  370051 provision.go:172] copyRemoteCerts
	I0229 02:31:15.360487  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:15.360512  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.363540  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.363931  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.363966  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.364154  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.364337  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.364495  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.364622  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.453643  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:15.483233  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 02:31:15.512164  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:15.543453  370051 provision.go:86] duration metric: configureAuth took 271.048547ms
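configureAuth, timed above, regenerates the machine's server certificate with the SAN list from the log (node IP, localhost, minikube, hostname) and copies it to the node. A self-signed sketch of building such a SAN certificate with crypto/x509; the real provisioner signs with the minikube CA key rather than self-signing, so this is illustrative only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-275488"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the log's san=[...] list.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-275488"},
		IPAddresses: []net.IP{net.ParseIP("192.168.39.160"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed (template doubles as parent); minikube instead signs
	// with its CA cert and ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}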
	I0229 02:31:15.543484  370051 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:15.543705  370051 config.go:182] Loaded profile config "old-k8s-version-275488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0229 02:31:15.543816  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.546472  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.546807  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.546835  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.547049  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.547272  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547455  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.547662  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.547861  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.548035  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.548052  370051 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:15.835533  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:15.835572  370051 machine.go:91] provisioned docker machine in 850.691497ms
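
The provisioning steps above are all remote shell commands issued through minikube's ssh_runner. A minimal sketch of the same pattern, assuming golang.org/x/crypto/ssh and reusing the host, user, and command values from the log (the key path is illustrative):

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/user/.minikube/machines/vm/id_rsa") // illustrative path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.160:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // The CRI-O option drop-in written in the log lines above.
        var out bytes.Buffer
        sess.Stdout = &out
        cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
        if err := sess.Run(cmd); err != nil {
            log.Fatal(err)
        }
        fmt.Print(out.String())
    }
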
	I0229 02:31:15.835589  370051 start.go:300] post-start starting for "old-k8s-version-275488" (driver="kvm2")
	I0229 02:31:15.835604  370051 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:15.835635  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:15.835995  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:15.836025  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.838946  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839297  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.839330  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.839460  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.839665  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.839839  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.840008  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:15.925849  370051 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:15.931227  370051 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:15.931260  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:15.931363  370051 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:15.931465  370051 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:15.931574  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:15.942500  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:15.972803  370051 start.go:303] post-start completed in 137.19736ms
	I0229 02:31:15.972838  370051 fix.go:56] fixHost completed within 24.084893996s
	I0229 02:31:15.972873  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:15.975698  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976063  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:15.976093  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:15.976279  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:15.976518  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976659  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:15.976795  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:15.976959  370051 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:15.977119  370051 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I0229 02:31:15.977130  370051 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:16.095892  370051 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173876.041987567
	
	I0229 02:31:16.095917  370051 fix.go:206] guest clock: 1709173876.041987567
	I0229 02:31:16.095927  370051 fix.go:219] Guest: 2024-02-29 02:31:16.041987567 +0000 UTC Remote: 2024-02-29 02:31:15.972843681 +0000 UTC m=+279.886639354 (delta=69.143886ms)
	I0229 02:31:16.095954  370051 fix.go:190] guest clock delta is within tolerance: 69.143886ms
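
fix.go derives the guest clock from `date +%s.%N`, compares it against the host-side timestamp, and accepts the drift when it sits inside a tolerance. A stdlib-only sketch of that comparison; the 2s threshold here is an assumption for illustration, not minikube's exact constant:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1709173876, 41987567) // parsed from `date +%s.%N` on the guest
        remote := time.Now()                     // host-side reference timestamp
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock drifted by %v; would resync here\n", delta)
        }
    }
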
	I0229 02:31:16.095962  370051 start.go:83] releasing machines lock for "old-k8s-version-275488", held for 24.208056775s
	I0229 02:31:16.095996  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.096336  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:16.099518  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100016  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.100060  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.100189  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100751  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.100955  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .DriverName
	I0229 02:31:16.101035  370051 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:16.101084  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.101167  370051 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:16.101190  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHHostname
	I0229 02:31:16.104588  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.104638  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105000  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105036  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105059  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:16.105101  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:16.105335  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105546  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHPort
	I0229 02:31:16.105590  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105821  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHKeyPath
	I0229 02:31:16.105832  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106002  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
	I0229 02:31:16.106028  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetSSHUsername
	I0229 02:31:16.106180  370051 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/old-k8s-version-275488/id_rsa Username:docker}
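
Note that the registry probe (`curl -sS -m 2 https://registry.k8s.io/`) and the `cat /version.json` read each get their own SSH client, which is why their setup logs interleave above. A sketch of running independent checks concurrently with golang.org/x/sync/errgroup; runCheck is a hypothetical local stand-in for the remote runner:

    package main

    import (
        "fmt"
        "os/exec"

        "golang.org/x/sync/errgroup"
    )

    // runCheck executes one command and reports its combined output.
    func runCheck(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("%s: %s", name, out)
        return err
    }

    func main() {
        var g errgroup.Group
        g.Go(func() error { return runCheck("curl", "-sS", "-m", "2", "https://registry.k8s.io/") })
        g.Go(func() error { return runCheck("cat", "/version.json") })
        if err := g.Wait(); err != nil {
            fmt.Println("check failed:", err)
        }
    }
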
	I0229 02:31:15.732828  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:15.739797  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:15.739827  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.232355  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.240421  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:16.240462  369869 api_server.go:103] status: https://192.168.61.233:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:16.732451  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:31:16.740118  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:31:16.748529  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:31:16.748567  369869 api_server.go:131] duration metric: took 4.0165029s to wait for apiserver health ...
	I0229 02:31:16.748580  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:31:16.748588  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:16.750561  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
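
The api_server.go loop above polls /healthz roughly every 500ms and treats the 500s as "not ready yet" while post-start hooks such as rbac/bootstrap-roles finish; at 02:31:16.740 the endpoint finally returns 200. A minimal polling sketch, assuming the URL from the log and skipping TLS verification because the apiserver serves a cluster-local certificate:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // cluster-local cert
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.233:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz ok: %s\n", body)
                    return
                }
                fmt.Printf("healthz returned %d; retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }
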
	I0229 02:31:16.194120  370051 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:16.220808  370051 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:16.386082  370051 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:16.393419  370051 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:16.393512  370051 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:16.418966  370051 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:16.419003  370051 start.go:475] detecting cgroup driver to use...
	I0229 02:31:16.419087  370051 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:16.444372  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:16.466354  370051 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:16.466430  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:16.488710  370051 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:16.509561  370051 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:16.651716  370051 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:16.840453  370051 docker.go:233] disabling docker service ...
	I0229 02:31:16.840538  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:16.869611  370051 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:16.890123  370051 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:17.047701  370051 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:17.225457  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:31:17.248553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:17.275486  370051 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0229 02:31:17.275572  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.290350  370051 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:17.290437  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.304093  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.320562  370051 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:17.339790  370051 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:17.356570  370051 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:17.371208  370051 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:17.371303  370051 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:17.390748  370051 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
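
crio.go first probes `sysctl net.bridge.bridge-nf-call-iptables`; because /proc/sys/net/bridge does not exist until the module is loaded, it falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. The same probe-then-fallback shape, sketched with os/exec:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // /proc/sys/net/bridge is absent until br_netfilter loads; not fatal
            fmt.Println("sysctl probe failed, loading br_netfilter:", err)
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
                os.Exit(1)
            }
        }
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            fmt.Fprintln(os.Stderr, "enabling ip_forward:", err)
            os.Exit(1)
        }
        fmt.Println("bridge netfilter and ip_forward configured")
    }
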
	I0229 02:31:17.405750  370051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:17.555023  370051 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:17.754419  370051 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:17.754508  370051 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:17.760190  370051 start.go:543] Will wait 60s for crictl version
	I0229 02:31:17.760245  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:17.765195  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:17.815839  370051 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:17.815953  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.857470  370051 ssh_runner.go:195] Run: crio --version
	I0229 02:31:17.896796  370051 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.29.1 ...
	I0229 02:31:13.906892  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:15.907106  369591 node_ready.go:58] node "no-preload-247751" has status "Ready":"False"
	I0229 02:31:16.914513  369591 node_ready.go:49] node "no-preload-247751" has status "Ready":"True"
	I0229 02:31:16.914545  369591 node_ready.go:38] duration metric: took 7.511932085s waiting for node "no-preload-247751" to be "Ready" ...
	I0229 02:31:16.914560  369591 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:16.925133  369591 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940518  369591 pod_ready.go:92] pod "coredns-76f75df574-2z5w8" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:16.940553  369591 pod_ready.go:81] duration metric: took 15.382701ms waiting for pod "coredns-76f75df574-2z5w8" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.940568  369591 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:16.122967  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Start
	I0229 02:31:16.123141  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring networks are active...
	I0229 02:31:16.124019  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network default is active
	I0229 02:31:16.124630  369508 main.go:141] libmachine: (embed-certs-915633) Ensuring network mk-embed-certs-915633 is active
	I0229 02:31:16.125118  369508 main.go:141] libmachine: (embed-certs-915633) Getting domain xml...
	I0229 02:31:16.126026  369508 main.go:141] libmachine: (embed-certs-915633) Creating domain...
	I0229 02:31:17.664537  369508 main.go:141] libmachine: (embed-certs-915633) Waiting to get IP...
	I0229 02:31:17.665883  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.666462  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.666595  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.666455  371066 retry.go:31] will retry after 193.172159ms: waiting for machine to come up
	I0229 02:31:17.861043  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:17.861754  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:17.861781  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:17.861651  371066 retry.go:31] will retry after 298.133474ms: waiting for machine to come up
	I0229 02:31:18.161304  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.161851  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.161886  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.161818  371066 retry.go:31] will retry after 402.680342ms: waiting for machine to come up
	I0229 02:31:18.566482  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:18.567145  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:18.567165  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:18.567068  371066 retry.go:31] will retry after 536.886613ms: waiting for machine to come up
	I0229 02:31:19.106090  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.106797  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.106823  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.106714  371066 retry.go:31] will retry after 583.032631ms: waiting for machine to come up
	I0229 02:31:19.691531  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:19.692096  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:19.692127  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:19.692000  371066 retry.go:31] will retry after 780.156818ms: waiting for machine to come up
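
The retry.go lines above poll for the embed-certs VM's DHCP lease with growing, jittered delays (193ms, 298ms, 402ms, ...). A generic sketch of that retry shape; the growth factor and jitter here are assumptions, not minikube's exact constants:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or attempts run out, sleeping with
    // exponential backoff plus jitter between tries.
    func retry(attempts int, base time.Duration, fn func() error) error {
        delay := base
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            fmt.Printf("will retry after %v\n", delay+jitter)
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // assumed growth factor
        }
        return errors.New("gave up waiting")
    }

    func main() {
        tries := 0
        err := retry(10, 200*time.Millisecond, func() error {
            tries++
            if tries < 4 {
                return errors.New("no IP yet") // stands in for the missing DHCP lease
            }
            return nil
        })
        fmt.Println("result:", err)
    }
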
	I0229 02:31:16.752375  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:31:16.783785  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:31:16.816646  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:31:16.829430  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:31:16.829480  369869 system_pods.go:61] "coredns-5dd5756b68-652db" [d989183e-dc0d-4913-8eab-fdfac0cf7ad7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:31:16.829491  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [aba29f47-cf0e-4ee5-8d18-7647b36369e4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:31:16.829501  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [26a426b2-d5b7-456e-a733-3317009974ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:31:16.829517  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [a896f9fa-991f-44bb-bd97-02fac3494eea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:31:16.829528  369869 system_pods.go:61] "kube-proxy-g976s" [bc750be0-ae2b-4033-b65b-f1cccaebf32f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:31:16.829536  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [d99d25bf-25f4-4057-aedb-fc5ba797af47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:31:16.829544  369869 system_pods.go:61] "metrics-server-57f55c9bc5-86frx" [0ad81c0d-3f9a-45d8-93d8-bbb9e276d5b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:31:16.829560  369869 system_pods.go:61] "storage-provisioner" [92683c3e-04c1-4cef-988d-3b8beb7d4399] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:31:16.829570  369869 system_pods.go:74] duration metric: took 12.896339ms to wait for pod list to return data ...
	I0229 02:31:16.829584  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:31:16.837494  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:31:16.837524  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:31:16.837535  369869 node_conditions.go:105] duration metric: took 7.942051ms to run NodePressure ...
	I0229 02:31:16.837560  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:17.293873  369869 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300874  369869 kubeadm.go:787] kubelet initialised
	I0229 02:31:17.300907  369869 kubeadm.go:788] duration metric: took 7.00259ms waiting for restarted kubelet to initialise ...
	I0229 02:31:17.300919  369869 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:31:17.315838  369869 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.328228  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328265  369869 pod_ready.go:81] duration metric: took 12.396088ms waiting for pod "coredns-5dd5756b68-652db" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.328278  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "coredns-5dd5756b68-652db" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.328287  369869 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.335458  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335487  369869 pod_ready.go:81] duration metric: took 7.145351ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.335497  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.335505  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:17.356278  369869 pod_ready.go:97] node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356365  369869 pod_ready.go:81] duration metric: took 20.849982ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	E0229 02:31:17.356385  369869 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-071485" hosting pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-071485" has status "Ready":"False"
	I0229 02:31:17.356396  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:19.376170  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:17.898162  370051 main.go:141] libmachine: (old-k8s-version-275488) Calling .GetIP
	I0229 02:31:17.901332  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.901809  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:fc:74", ip: ""} in network mk-old-k8s-version-275488: {Iface:virbr2 ExpiryTime:2024-02-29 03:20:40 +0000 UTC Type:0 Mac:52:54:00:6c:fc:74 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:old-k8s-version-275488 Clientid:01:52:54:00:6c:fc:74}
	I0229 02:31:17.901840  370051 main.go:141] libmachine: (old-k8s-version-275488) DBG | domain old-k8s-version-275488 has defined IP address 192.168.39.160 and MAC address 52:54:00:6c:fc:74 in network mk-old-k8s-version-275488
	I0229 02:31:17.902046  370051 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:17.907256  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:17.924135  370051 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 02:31:17.924218  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:17.986923  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:17.986992  370051 ssh_runner.go:195] Run: which lz4
	I0229 02:31:17.992110  370051 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:17.997252  370051 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:17.997287  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0229 02:31:20.124958  370051 crio.go:444] Took 2.132885 seconds to copy over tarball
	I0229 02:31:20.125075  370051 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
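
With no preload found on the VM, the 441 MB lz4 tarball is copied over and unpacked in place. The extraction command from the log, wrapped in Go for a self-contained sketch:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // -I lz4 pipes the archive through lz4; --xattrs keeps file capabilities intact
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
        log.Println("preload extracted under /var")
    }
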
	I0229 02:31:18.948383  369591 pod_ready.go:102] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:20.950330  369591 pod_ready.go:92] pod "etcd-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:20.950359  369591 pod_ready.go:81] duration metric: took 4.009782336s waiting for pod "etcd-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:20.950372  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460878  369591 pod_ready.go:92] pod "kube-apiserver-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.460907  369591 pod_ready.go:81] duration metric: took 1.510525429s waiting for pod "kube-apiserver-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.460922  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468463  369591 pod_ready.go:92] pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.468487  369591 pod_ready.go:81] duration metric: took 7.556807ms waiting for pod "kube-controller-manager-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.468497  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476459  369591 pod_ready.go:92] pod "kube-proxy-cdc4l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.476488  369591 pod_ready.go:81] duration metric: took 7.983254ms waiting for pod "kube-proxy-cdc4l" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.476501  369591 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482564  369591 pod_ready.go:92] pod "kube-scheduler-no-preload-247751" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:22.482589  369591 pod_ready.go:81] duration metric: took 6.080532ms waiting for pod "kube-scheduler-no-preload-247751" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:22.482598  369591 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
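
pod_ready.go walks each system-critical pod and waits for its Ready condition before moving on. A compact client-go sketch of the same check for a single pod; the kubeconfig path is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-247751", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
    }
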
	I0229 02:31:20.474186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:20.474741  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:20.474784  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:20.474647  371066 retry.go:31] will retry after 845.550951ms: waiting for machine to come up
	I0229 02:31:21.322246  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:21.323007  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:21.323031  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:21.322935  371066 retry.go:31] will retry after 1.085864892s: waiting for machine to come up
	I0229 02:31:22.410244  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:22.410735  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:22.410766  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:22.410687  371066 retry.go:31] will retry after 1.587558593s: waiting for machine to come up
	I0229 02:31:24.000303  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:24.000914  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:24.000944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:24.000828  371066 retry.go:31] will retry after 2.058374822s: waiting for machine to come up
	I0229 02:31:21.867552  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.972250  369869 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:23.981829  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.981860  369869 pod_ready.go:81] duration metric: took 6.625453699s waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.981875  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994568  369869 pod_ready.go:92] pod "kube-proxy-g976s" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:23.994597  369869 pod_ready.go:81] duration metric: took 12.712769ms waiting for pod "kube-proxy-g976s" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.994609  369869 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002085  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:31:24.002110  369869 pod_ready.go:81] duration metric: took 7.492788ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:24.002133  369869 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	I0229 02:31:23.625489  370051 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.500380961s)
	I0229 02:31:23.625526  370051 crio.go:451] Took 3.500531 seconds to extract the tarball
	I0229 02:31:23.625536  370051 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 02:31:23.671458  370051 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:23.714048  370051 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0229 02:31:23.714087  370051 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 02:31:23.714189  370051 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.714213  370051 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.714309  370051 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 02:31:23.714424  370051 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.714269  370051 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.714461  370051 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.714519  370051 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.714192  370051 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.716086  370051 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.716077  370051 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.716076  370051 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.716088  370051 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:23.716143  370051 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.716081  370051 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.716275  370051 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 02:31:23.838722  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.844569  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 02:31:23.853089  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:23.857738  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 02:31:23.864060  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:23.865519  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:23.926256  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:23.997349  370051 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 02:31:23.997401  370051 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:23.997463  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.010625  370051 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 02:31:24.010674  370051 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0229 02:31:24.010722  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083140  370051 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 02:31:24.083203  370051 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 02:31:24.083232  370051 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 02:31:24.083247  370051 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.083266  370051 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.083269  370051 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.083308  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083319  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083364  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.083166  370051 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 02:31:24.083426  370051 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.083471  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123878  370051 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 02:31:24.123928  370051 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.123972  370051 ssh_runner.go:195] Run: which crictl
	I0229 02:31:24.123982  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 02:31:24.123973  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0229 02:31:24.124043  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 02:31:24.124051  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 02:31:24.124097  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0229 02:31:24.124153  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 02:31:24.152226  370051 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0229 02:31:24.270585  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 02:31:24.305436  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 02:31:24.305532  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 02:31:24.305621  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 02:31:24.305629  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 02:31:24.305799  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 02:31:24.316950  370051 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 02:31:24.635837  370051 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:24.791670  370051 cache_images.go:92] LoadImages completed in 1.077558745s
	W0229 02:31:24.791798  370051 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
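
LoadImages falls back to per-image files under the cache directory and, as the warning shows, surfaces a stat error when a cached image was never downloaded. A small sketch of that existence check, using the cache layout from the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        cacheDir := "/home/jenkins/minikube-integration/18063-316644/.minikube/cache/images/amd64"
        images := []string{
            "registry.k8s.io/kube-controller-manager_v1.16.0",
            "registry.k8s.io/pause_3.1",
        }
        for _, img := range images {
            p := filepath.Join(cacheDir, img)
            if _, err := os.Stat(p); err != nil {
                fmt.Printf("X Unable to load cached image: %v\n", err)
                continue
            }
            fmt.Println("would load", p)
        }
    }
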
	I0229 02:31:24.791902  370051 ssh_runner.go:195] Run: crio config
	I0229 02:31:24.851132  370051 cni.go:84] Creating CNI manager for ""
	I0229 02:31:24.851164  370051 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:24.851189  370051 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:24.851213  370051 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-275488 NodeName:old-k8s-version-275488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 02:31:24.851423  370051 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-275488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.160
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-275488
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.160:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:31:24.851524  370051 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-275488 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
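For reference, the kubeadm/kubelet YAML above is rendered from the options struct printed on the kubeadm.go:176 line. A minimal Go sketch of that rendering step, assuming a much-simplified template rather than minikube's actual one (struct and template here are illustrative only):

package main

import (
	"os"
	"text/template"
)

// Opts carries only the fields this sketch needs; the struct printed
// in the log above has many more.
type Opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the kubeadm options line in the log.
	t.Execute(os.Stdout, Opts{
		AdvertiseAddress:  "192.168.39.160",
		APIServerPort:     8443,
		ClusterName:       "old-k8s-version-275488",
		KubernetesVersion: "v1.16.0",
		PodSubnet:         "10.244.0.0/16",
	})
}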
	I0229 02:31:24.851598  370051 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 02:31:24.864237  370051 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:24.864330  370051 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:24.879552  370051 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0229 02:31:24.901027  370051 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:24.920638  370051 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2181 bytes)
	I0229 02:31:24.941894  370051 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:24.947439  370051 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:24.962396  370051 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488 for IP: 192.168.39.160
	I0229 02:31:24.962435  370051 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:24.962621  370051 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:24.962673  370051 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:24.962781  370051 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/client.key
	I0229 02:31:24.962851  370051 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key.80b25619
	I0229 02:31:24.962919  370051 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key
	I0229 02:31:24.963087  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:24.963126  370051 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:24.963138  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:24.963185  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:24.963213  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:24.963245  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:24.963296  370051 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:24.963980  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:24.996049  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:25.030503  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:25.057695  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/old-k8s-version-275488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:25.091982  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:25.126636  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:25.156613  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:25.186480  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:25.221012  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:25.254122  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:25.282646  370051 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:25.312624  370051 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:25.335020  370051 ssh_runner.go:195] Run: openssl version
	I0229 02:31:25.342920  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:25.355808  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361349  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.361433  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:25.368335  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:25.380799  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:25.393069  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398466  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.398539  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:25.404776  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:25.416735  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:25.428884  370051 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434503  370051 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.434584  370051 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:25.441187  370051 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
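The three ln -fs commands above implement OpenSSL's hashed-directory CA lookup: `openssl x509 -hash -noout` prints the subject-name hash (e.g. b5213941 for the minikube CA), and a "<hash>.0" symlink in /etc/ssl/certs lets OpenSSL locate the certificate by that hash. A minimal Go sketch of the same step (not minikube's code; the cert path is taken from the log, and writing to /etc/ssl/certs needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // from the log
	// `openssl x509 -hash -noout` prints the subject-name hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// The log guards with `test -L || ln -fs`; here we just tolerate an
	// existing link.
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}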
	I0229 02:31:25.453174  370051 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:25.458712  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:25.466032  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:25.473895  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:25.482948  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:25.491808  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:25.499003  370051 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
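Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is how the validity of the existing certs is screened before reuse. The same check can be done in-process; a minimal Go sketch with crypto/x509, not minikube's implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within d,
// mirroring `openssl x509 -checkend` from the log above.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}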
	I0229 02:31:25.506691  370051 kubeadm.go:404] StartCluster: {Name:old-k8s-version-275488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-275488 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:25.506829  370051 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:25.506883  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:25.551867  370051 cri.go:89] found id: ""
	I0229 02:31:25.551970  370051 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:25.564446  370051 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:25.564476  370051 kubeadm.go:636] restartCluster start
	I0229 02:31:25.564545  370051 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:25.576275  370051 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:25.577406  370051 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-275488" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:31:25.578043  370051 kubeconfig.go:146] "old-k8s-version-275488" context is missing from /home/jenkins/minikube-integration/18063-316644/kubeconfig - will repair!
	I0229 02:31:25.578979  370051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:25.580805  370051 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:25.592154  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:25.592259  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:25.609268  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:26.092701  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.092827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.108636  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:24.491508  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.492827  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.496040  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.062093  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:26.062582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:26.062612  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:26.062525  371066 retry.go:31] will retry after 2.231071357s: waiting for machine to come up
	I0229 02:31:28.295693  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:28.296180  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:28.296214  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:28.296116  371066 retry.go:31] will retry after 2.376277578s: waiting for machine to come up
	I0229 02:31:26.010834  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:28.031628  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:26.592320  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:26.592412  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:26.606907  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.092891  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.093028  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.112353  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:27.592956  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:27.593058  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:27.612315  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.092611  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.092729  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.108095  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:28.592592  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:28.592679  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:28.612145  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.092605  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.092720  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.113807  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:29.593002  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:29.593085  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:29.609337  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.092667  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.092757  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.112800  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.592328  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:30.592415  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:30.610909  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:31.092418  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.092529  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.109490  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:30.990551  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.990785  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:30.675432  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:30.675962  369508 main.go:141] libmachine: (embed-certs-915633) DBG | unable to find current IP address of domain embed-certs-915633 in network mk-embed-certs-915633
	I0229 02:31:30.675995  369508 main.go:141] libmachine: (embed-certs-915633) DBG | I0229 02:31:30.675901  371066 retry.go:31] will retry after 4.442717853s: waiting for machine to come up
	I0229 02:31:30.511576  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:32.515611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:35.010325  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:31.593046  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:31.593128  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:31.608148  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.092187  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.092299  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.107573  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:32.593184  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:32.593312  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:32.607993  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.092500  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.092603  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.107359  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:33.592987  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:33.593101  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:33.608041  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.092919  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.093023  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.107597  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:34.593200  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:34.593295  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:34.608100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.092589  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.092683  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.107100  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.592815  370051 api_server.go:166] Checking apiserver status ...
	I0229 02:31:35.592928  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:35.610879  370051 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:35.610916  370051 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
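The repeated "Checking apiserver status" entries above are a poll loop: probe for the apiserver roughly every 500ms until a context deadline expires, then fall back to reconfiguring the cluster. A minimal Go sketch of that pattern (the pgrep probe mirrors the log; the 10-second timeout and everything else here are illustrative, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			// Matches the outcome in the log: give up and reconfigure.
			fmt.Println("needs reconfigure: apiserver error:", ctx.Err())
			return
		case <-tick.C:
			// pgrep exits 1 when no process matches, as in the log.
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("apiserver is up")
				return
			}
		}
	}
}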
	I0229 02:31:35.610930  370051 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:35.610947  370051 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:35.611032  370051 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:35.660059  370051 cri.go:89] found id: ""
	I0229 02:31:35.660146  370051 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:35.682067  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:35.694455  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:35.694542  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707118  370051 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:35.707149  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:35.834811  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:35.123364  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123906  369508 main.go:141] libmachine: (embed-certs-915633) Found IP for machine: 192.168.50.218
	I0229 02:31:35.123925  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has current primary IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.123931  369508 main.go:141] libmachine: (embed-certs-915633) Reserving static IP address...
	I0229 02:31:35.124398  369508 main.go:141] libmachine: (embed-certs-915633) Reserved static IP address: 192.168.50.218
	I0229 02:31:35.124423  369508 main.go:141] libmachine: (embed-certs-915633) Waiting for SSH to be available...
	I0229 02:31:35.124441  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.124468  369508 main.go:141] libmachine: (embed-certs-915633) DBG | skip adding static IP to network mk-embed-certs-915633 - found existing host DHCP lease matching {name: "embed-certs-915633", mac: "52:54:00:26:ca:ce", ip: "192.168.50.218"}
	I0229 02:31:35.124487  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Getting to WaitForSSH function...
	I0229 02:31:35.126676  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127004  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.127035  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.127137  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH client type: external
	I0229 02:31:35.127168  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa (-rw-------)
	I0229 02:31:35.127199  369508 main.go:141] libmachine: (embed-certs-915633) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.218 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:31:35.127213  369508 main.go:141] libmachine: (embed-certs-915633) DBG | About to run SSH command:
	I0229 02:31:35.127224  369508 main.go:141] libmachine: (embed-certs-915633) DBG | exit 0
	I0229 02:31:35.251075  369508 main.go:141] libmachine: (embed-certs-915633) DBG | SSH cmd err, output: <nil>: 
	I0229 02:31:35.251474  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetConfigRaw
	I0229 02:31:35.252256  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.254934  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255350  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.255378  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.255676  369508 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/config.json ...
	I0229 02:31:35.255881  369508 machine.go:88] provisioning docker machine ...
	I0229 02:31:35.255905  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:35.256154  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256344  369508 buildroot.go:166] provisioning hostname "embed-certs-915633"
	I0229 02:31:35.256369  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.256506  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.258794  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259163  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.259186  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.259337  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.259551  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259716  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.259875  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.260066  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.260256  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.260269  369508 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-915633 && echo "embed-certs-915633" | sudo tee /etc/hostname
	I0229 02:31:35.383734  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-915633
	
	I0229 02:31:35.383770  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.386559  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.386913  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.386944  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.387121  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.387359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387631  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.387815  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.387979  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.388158  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.388175  369508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-915633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-915633/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-915633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:31:35.521449  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:31:35.521490  369508 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:31:35.521530  369508 buildroot.go:174] setting up certificates
	I0229 02:31:35.521544  369508 provision.go:83] configureAuth start
	I0229 02:31:35.521573  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetMachineName
	I0229 02:31:35.521923  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:35.524829  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525193  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.525217  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.525348  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.527582  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.527980  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.528012  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.528164  369508 provision.go:138] copyHostCerts
	I0229 02:31:35.528216  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:31:35.528234  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:31:35.528290  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:31:35.528384  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:31:35.528396  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:31:35.528415  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:31:35.528514  369508 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:31:35.528525  369508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:31:35.528544  369508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:31:35.528591  369508 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.embed-certs-915633 san=[192.168.50.218 192.168.50.218 localhost 127.0.0.1 minikube embed-certs-915633]
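The server cert generated above is signed by the shared CA and carries the SANs listed on the provision.go line. A minimal Go sketch of issuing such a certificate with crypto/x509 (file names are illustrative, the CA key is assumed to be RSA/PKCS#1, error handling is elided; this is not minikube's code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// ca.pem / ca-key.pem stand in for the shared CA files in the log.
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	ca, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-915633"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "embed-certs-915633"},
		IPAddresses: []net.IP{net.ParseIP("192.168.50.218"), net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}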
	I0229 02:31:35.778616  369508 provision.go:172] copyRemoteCerts
	I0229 02:31:35.778679  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:31:35.778706  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.782134  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782605  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.782640  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.782833  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.783103  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.783305  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.783522  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:35.870506  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:31:35.904595  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:31:35.936515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 02:31:35.966505  369508 provision.go:86] duration metric: configureAuth took 444.939951ms
	I0229 02:31:35.966539  369508 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:31:35.966725  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:31:35.966831  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:35.969731  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970133  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:35.970176  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:35.970402  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:35.970623  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970788  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:35.970968  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:35.971139  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:35.971382  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:35.971401  369508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:31:36.262676  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:31:36.262719  369508 machine.go:91] provisioned docker machine in 1.00682197s
	I0229 02:31:36.262731  369508 start.go:300] post-start starting for "embed-certs-915633" (driver="kvm2")
	I0229 02:31:36.262743  369508 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:31:36.262765  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.263140  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:31:36.263179  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.265718  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266095  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.266126  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.266278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.266486  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.266658  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.266795  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.359474  369508 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:31:36.365071  369508 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:31:36.365110  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:31:36.365202  369508 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:31:36.365279  369508 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:31:36.365395  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:31:36.376823  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:36.406525  369508 start.go:303] post-start completed in 143.75518ms
	I0229 02:31:36.406588  369508 fix.go:56] fixHost completed within 20.310442727s
	I0229 02:31:36.406619  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.409415  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.409840  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.409875  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.410009  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.410214  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410412  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.410567  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.410715  369508 main.go:141] libmachine: Using SSH client type: native
	I0229 02:31:36.410936  369508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.218 22 <nil> <nil>}
	I0229 02:31:36.410950  369508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 02:31:36.520508  369508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709173896.494400897
	
	I0229 02:31:36.520543  369508 fix.go:206] guest clock: 1709173896.494400897
	I0229 02:31:36.520555  369508 fix.go:219] Guest: 2024-02-29 02:31:36.494400897 +0000 UTC Remote: 2024-02-29 02:31:36.406594326 +0000 UTC m=+361.755087901 (delta=87.806571ms)
	I0229 02:31:36.520584  369508 fix.go:190] guest clock delta is within tolerance: 87.806571ms
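The guest-clock check above runs `date +%s.%N` on the guest, parses the result as seconds.nanoseconds, and accepts the skew if the delta against the host clock is within tolerance. A minimal local sketch in Go (the SSH hop is replaced by a local exec call, and the tolerance value is assumed; the log accepts a delta of about 88ms):

package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Stand-in for running `date +%s.%N` on the guest over SSH.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("delta=%v within tolerance: %v\n",
		delta, math.Abs(delta.Seconds()) < tolerance.Seconds())
}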
	I0229 02:31:36.520597  369508 start.go:83] releasing machines lock for "embed-certs-915633", held for 20.424490067s
	I0229 02:31:36.520629  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.520949  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:36.523819  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524146  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.524185  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.524359  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.524912  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525109  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:31:36.525206  369508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:31:36.525251  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.525332  369508 ssh_runner.go:195] Run: cat /version.json
	I0229 02:31:36.525360  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:31:36.528265  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528470  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528614  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528641  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.528826  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:36.528829  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.528855  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:36.529047  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:31:36.529135  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529253  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:31:36.529321  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529414  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:31:36.529478  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.529556  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:31:36.611757  369508 ssh_runner.go:195] Run: systemctl --version
	I0229 02:31:36.638875  369508 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:31:36.786219  369508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:31:36.798964  369508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:31:36.799056  369508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:31:36.817942  369508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:31:36.817975  369508 start.go:475] detecting cgroup driver to use...
	I0229 02:31:36.818086  369508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:31:36.837019  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:31:36.855078  369508 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:31:36.855159  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:31:36.873444  369508 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:31:36.891708  369508 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:31:37.031928  369508 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:31:37.212859  369508 docker.go:233] disabling docker service ...
	I0229 02:31:37.212960  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:31:37.235232  369508 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:31:37.253901  369508 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:31:37.401366  369508 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:31:37.530791  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
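
Stripped of timestamps, the cri-docker/docker teardown above reduces to the following shell sequence (a sketch assembled from the logged ssh_runner commands; only the final is-active probe's exit handling is added here):

    # Stop, disable, and mask cri-docker so it cannot claim the CRI socket.
    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service

    # Same treatment for the docker service itself.
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service

    # Confirm docker is no longer active (argument form as logged).
    sudo systemctl is-active --quiet service docker || echo "docker inactive"
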
	I0229 02:31:37.547864  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:31:37.570344  369508 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:31:37.570416  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.582275  369508 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:31:37.582345  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.593628  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:31:37.605168  369508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
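
The crictl and CRI-O configuration above boils down to one file write and four sed edits (commands taken verbatim from the log):

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pin the pause image and the cgroup driver in the CRI-O drop-in,
    # then force conmon into the pod cgroup.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
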
	I0229 02:31:37.616567  369508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 02:31:37.628153  369508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:31:37.638579  369508 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:31:37.638640  369508 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:31:37.652738  369508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
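
The netfilter probe above fails with status 255 because br_netfilter is not yet loaded; the recovery path, as a sketch of the logged steps:

    # If the bridge-netfilter sysctl node is missing, load the module,
    # then enable IPv4 forwarding for pod traffic.
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
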
	I0229 02:31:37.664118  369508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:31:37.785330  369508 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:31:37.933006  369508 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:31:37.933095  369508 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:31:37.938625  369508 start.go:543] Will wait 60s for crictl version
	I0229 02:31:37.938702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:31:37.943285  369508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:31:37.984992  369508 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:31:37.985105  369508 ssh_runner.go:195] Run: crio --version
	I0229 02:31:38.018467  369508 ssh_runner.go:195] Run: crio --version
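
The two 60-second waits above (first for the socket path, then for crictl) amount to a simple poll; this is a shell sketch, not minikube's Go implementation:

    # Wait up to 60s for the CRI-O socket to appear after restart.
    for i in $(seq 1 60); do
      stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
      sleep 1
    done
    # Query the runtime; the log shows RuntimeName cri-o, RuntimeVersion 1.29.1.
    sudo /usr/bin/crictl version
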
	I0229 02:31:38.051472  369508 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0229 02:31:34.991345  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.991987  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:38.052850  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetIP
	I0229 02:31:38.055688  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.055970  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:31:38.056006  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:31:38.056253  369508 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 02:31:38.060925  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:31:38.076126  369508 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 02:31:38.076197  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:38.116261  369508 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0229 02:31:38.116372  369508 ssh_runner.go:195] Run: which lz4
	I0229 02:31:38.121080  369508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 02:31:38.125711  369508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:31:38.125755  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0229 02:31:37.012008  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:39.018348  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:36.790885  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.042778  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.130251  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:37.215289  370051 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:37.215384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:37.715589  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.715938  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.215781  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:39.716505  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.216238  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:40.716182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:38.992988  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.491712  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.492458  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:40.139859  369508 crio.go:444] Took 2.018817 seconds to copy over tarball
	I0229 02:31:40.139953  369508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:31:43.071745  369508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.931752333s)
	I0229 02:31:43.071797  369508 crio.go:451] Took 2.931905 seconds to extract the tarball
	I0229 02:31:43.071809  369508 ssh_runner.go:146] rm: /preloaded.tar.lz4
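
The preload flow above, as shell (the tar invocation is verbatim from the log; it took ~2.9s here for a ~458 MB tarball):

    # Extract the preloaded image tarball into /var, keeping security xattrs
    # so file capabilities survive, then clean up the tarball.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    # Afterwards crictl reports every required image as already present.
    sudo crictl images --output json
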
	I0229 02:31:43.118127  369508 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:31:43.171147  369508 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:31:43.171176  369508 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:31:43.171262  369508 ssh_runner.go:195] Run: crio config
	I0229 02:31:43.232177  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:31:43.232203  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:31:43.232229  369508 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 02:31:43.232247  369508 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.218 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-915633 NodeName:embed-certs-915633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.218"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:31:43.232419  369508 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-915633"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.218"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 02:31:43.232519  369508 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-915633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
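
Reassembled, the kubelet drop-in that minikube writes (to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, per the scp line below) looks roughly like this; the [Install] body is truncated in the log, so this is a best-effort reconstruction:

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-915633 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.218

    [Install]
    EOF
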
	I0229 02:31:43.232596  369508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 02:31:43.244392  369508 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:31:43.244467  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:31:43.256293  369508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0229 02:31:43.275397  369508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 02:31:43.295494  369508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0229 02:31:43.316812  369508 ssh_runner.go:195] Run: grep 192.168.50.218	control-plane.minikube.internal$ /etc/hosts
	I0229 02:31:43.321496  369508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
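
The /etc/hosts update above (used for both host.minikube.internal and control-plane.minikube.internal) is an idempotent replace: drop any stale tab-separated entry for the name, append the fresh one, and copy the result back with sudo:

    # Refresh the control-plane alias without duplicating it.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.50.218	control-plane.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
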
	I0229 02:31:43.335055  369508 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633 for IP: 192.168.50.218
	I0229 02:31:43.335092  369508 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:31:43.335270  369508 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:31:43.335316  369508 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:31:43.335388  369508 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/client.key
	I0229 02:31:43.335442  369508 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key.cc0da009
	I0229 02:31:43.335475  369508 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key
	I0229 02:31:43.335584  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:31:43.335610  369508 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:31:43.335619  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:31:43.335642  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:31:43.335673  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:31:43.335710  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:31:43.335779  369508 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:31:43.336455  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:31:43.364985  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 02:31:43.394189  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:31:43.424515  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/embed-certs-915633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:31:43.456589  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:31:43.486396  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:31:43.516931  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:31:43.546421  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:31:43.578923  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:31:43.608333  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:31:43.637196  369508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:31:43.667522  369508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:31:43.688266  369508 ssh_runner.go:195] Run: openssl version
	I0229 02:31:43.695616  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:31:43.709892  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715346  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.715426  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:31:43.722688  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
	I0229 02:31:43.735866  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:31:43.749967  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757599  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.757671  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:31:43.765157  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:31:43.779671  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:31:43.792900  369508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798505  369508 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.798576  369508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:31:43.805192  369508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:31:43.818233  369508 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:31:43.823681  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:31:43.831016  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:31:43.837899  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:31:43.844802  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:31:43.851881  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:31:43.858689  369508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
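
The certificate plumbing above follows the standard OpenSSL trust-store layout: each CA is linked into /etc/ssl/certs under its subject hash, and every control-plane cert is checked for expiry within 24 hours. A sketch of the logged commands:

    # Install minikubeCA into the trust store: link it in, hash it,
    # then create the <hash>.0 symlink OpenSSL looks up at verify time.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

    # Exit non-zero if the cert expires within 86400s (24h).
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
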
	I0229 02:31:43.865749  369508 kubeadm.go:404] StartCluster: {Name:embed-certs-915633 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-915633 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:31:43.865852  369508 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:31:43.865925  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:43.906012  369508 cri.go:89] found id: ""
	I0229 02:31:43.906116  369508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:31:43.918241  369508 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:31:43.918265  369508 kubeadm.go:636] restartCluster start
	I0229 02:31:43.918349  369508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:31:43.930524  369508 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:43.931550  369508 kubeconfig.go:92] found "embed-certs-915633" server: "https://192.168.50.218:8443"
	I0229 02:31:43.933612  369508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:31:43.944469  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:43.944519  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:43.958194  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:44.444746  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:44.444840  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:44.458567  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:41.510364  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:43.511424  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:41.216236  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:41.716082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.215537  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:42.715524  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.215873  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:43.715634  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.216464  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:44.715519  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.216430  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.716196  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:45.990995  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.489390  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:44.944934  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.003707  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.018797  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.445348  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.445435  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.460199  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:45.944750  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:45.944879  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:45.959309  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.445218  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.445313  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.459195  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.945456  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:46.945538  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:46.959212  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.444711  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.444819  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.459189  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:47.944651  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:47.944726  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:47.958733  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.445008  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.445100  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.460126  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:48.944649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:48.944731  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:48.959993  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:49.444545  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.444628  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.458889  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:46.011657  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:48.508465  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:46.215715  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:46.715657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.216495  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:47.715491  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.215459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:48.715556  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.215675  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:49.716046  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.215993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.715594  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:50.489578  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.990638  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:49.945108  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:49.945265  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:49.960625  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.444843  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.444923  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.459329  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:50.944871  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:50.944963  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:50.959583  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.444601  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.444704  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.462037  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:51.944573  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:51.944658  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:51.958538  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.445111  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.445269  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.462902  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:52.945088  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:52.945182  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:52.960241  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.444649  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.444738  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.458642  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.945214  369508 api_server.go:166] Checking apiserver status ...
	I0229 02:31:53.945291  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:31:53.960552  369508 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:31:53.960588  369508 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
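
The repeated pgrep probes above are a fixed-interval poll (roughly every 500ms) that ends when the surrounding context deadline expires, which is what happened here and forced the reconfigure. As a shell sketch (the deadline handling lives in the Go caller, not shown):

    # Poll for a kube-apiserver process started by minikube.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 0.5
    done
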
	I0229 02:31:53.960600  369508 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:31:53.960615  369508 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:31:53.960671  369508 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:31:54.005230  369508 cri.go:89] found id: ""
	I0229 02:31:54.005321  369508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:31:54.027544  369508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:31:54.040517  369508 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:31:54.040577  369508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051200  369508 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:31:54.051223  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:54.168817  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:50.509119  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:52.509526  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:54.511540  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:51.215927  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:51.715888  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.215659  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:52.715769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.216175  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:53.715755  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.216468  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.216280  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:55.715924  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:54.992721  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:57.490570  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:55.091652  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.346578  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:31:55.443373  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
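
Rather than a full `kubeadm init`, the restart path replays individual phases against the staged config, in the order shown in the log:

    # Phase-by-phase control-plane (re)build, exactly as logged.
    KPATH="/var/lib/minikube/binaries/v1.28.4:$PATH"
    sudo env PATH="$KPATH" kubeadm init phase certs all        --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KPATH" kubeadm init phase kubeconfig all   --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KPATH" kubeadm init phase kubelet-start    --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KPATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$KPATH" kubeadm init phase etcd local       --config /var/tmp/minikube/kubeadm.yaml
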
	I0229 02:31:55.542444  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:31:55.542562  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.042870  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.542972  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.571776  369508 api_server.go:72] duration metric: took 1.029332492s to wait for apiserver process to appear ...
	I0229 02:31:56.571808  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:31:56.571831  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:56.572606  369508 api_server.go:269] stopped: https://192.168.50.218:8443/healthz: Get "https://192.168.50.218:8443/healthz": dial tcp 192.168.50.218:8443: connect: connection refused
	I0229 02:31:57.072145  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.557011  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.557048  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.557066  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.609944  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:31:59.610010  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:31:59.610028  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:31:59.669911  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:31:59.669955  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:31:57.010655  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:31:59.510097  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:00.071971  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.084661  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.084690  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:00.572262  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:00.577772  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:32:00.577807  369508 api_server.go:103] status: https://192.168.50.218:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:32:01.072371  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:32:01.077306  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:32:01.084492  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:32:01.084531  369508 api_server.go:131] duration metric: took 4.512702749s to wait for apiserver health ...
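
The healthz progression above (connection refused -> 403 -> 500 -> 200 over ~4.5s) can be reproduced with curl; -k is needed because the probe runs before the cluster CA is trusted, and the anonymous 403s clear once the rbac/bootstrap-roles poststarthook finishes:

    # Plain health probe; returns "ok" once all poststarthooks pass.
    curl -sk https://192.168.50.218:8443/healthz
    # Per-check [+]/[-] detail, matching the failure listings in the log.
    curl -sk "https://192.168.50.218:8443/healthz?verbose"
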
	I0229 02:32:01.084544  369508 cni.go:84] Creating CNI manager for ""
	I0229 02:32:01.084554  369508 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:32:01.086337  369508 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:31:56.215653  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:56.715898  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.215954  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:57.715645  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.216366  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:58.716093  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.215944  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:31:59.715553  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.216341  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:00.715677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
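The repeated pgrep lines from the second profile (PID 370051) are a fixed-interval poll: the same process check is retried roughly every 500ms until a kube-apiserver process appears in the guest. A sketch of that loop under those assumptions:

    package main

    import (
        "os/exec"
        "time"
    )

    func main() {
        // pgrep exits non-zero when nothing matches, so a nil error means a
        // kube-apiserver process now exists.
        for exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() != nil {
            time.Sleep(500 * time.Millisecond)
        }
    }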
	I0229 02:32:01.087584  369508 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:32:01.099724  369508 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
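The log records only the size of the pushed CNI config (457 bytes), not its contents, so the conflist below is an assumed, typical bridge configuration rather than minikube's verbatim template; the write itself mirrors the mkdir-and-scp step above:

    package main

    import "os"

    // Assumed bridge conflist; the real file is generated by minikube.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        // World-readable so the kubelet and CRI-O can load the network config.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }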
	I0229 02:32:01.122381  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:32:01.133632  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:32:01.133674  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:32:01.133684  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:32:01.133697  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:32:01.133710  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:32:01.133720  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:32:01.133728  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:32:01.133738  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:32:01.133746  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:32:01.133755  369508 system_pods.go:74] duration metric: took 11.346225ms to wait for pod list to return data ...
	I0229 02:32:01.133767  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:32:01.138716  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:32:01.138746  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:32:01.138760  369508 node_conditions.go:105] duration metric: took 4.985648ms to run NodePressure ...
	I0229 02:32:01.138783  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:32:01.368503  369508 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373648  369508 kubeadm.go:787] kubelet initialised
	I0229 02:32:01.373669  369508 kubeadm.go:788] duration metric: took 5.137378ms waiting for restarted kubelet to initialise ...
	I0229 02:32:01.373677  369508 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:01.379649  369508 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.384724  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384750  369508 pod_ready.go:81] duration metric: took 5.071017ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.384758  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.384765  369508 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.390019  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390048  369508 pod_ready.go:81] duration metric: took 5.27491ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.390059  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "etcd-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.390067  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.396275  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396294  369508 pod_ready.go:81] duration metric: took 6.218856ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.396302  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.396307  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.525881  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525914  369508 pod_ready.go:81] duration metric: took 129.596783ms waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.525927  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.525935  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:01.926806  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926843  369508 pod_ready.go:81] duration metric: took 400.889304ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:01.926856  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-proxy-6tt7l" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:01.926864  369508 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.326588  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326621  369508 pod_ready.go:81] duration metric: took 399.74816ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.326633  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.326639  369508 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:02.727730  369508 pod_ready.go:97] node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727759  369508 pod_ready.go:81] duration metric: took 401.108694ms waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:32:02.727769  369508 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-915633" hosting pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:02.727776  369508 pod_ready.go:38] duration metric: took 1.354090438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
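Each pod_ready wait above reduces to one check: the pod's PodReady condition must be True, and while the node itself reports Ready=False the wait is skipped rather than burned, as the "(skipping!)" lines show. A condensed client-go sketch of that check (the kubeconfig path and pod name are taken from the log; the helper name is illustrative):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18063-316644/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := podReady(cs, "kube-system", "coredns-5dd5756b68-kt28m")
        fmt.Println(ready, err)
    }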
	I0229 02:32:02.727795  369508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:32:02.742069  369508 ops.go:34] apiserver oom_adj: -16
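The -16 read back here means the apiserver is shielded from the kernel's OOM killer (lower oom_adj = less likely to be killed; modern kernels translate this legacy knob into oom_score_adj). A small sketch of the same probe:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        // Take the first matching PID.
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }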
	I0229 02:32:02.742097  369508 kubeadm.go:640] restartCluster took 18.823823408s
	I0229 02:32:02.742107  369508 kubeadm.go:406] StartCluster complete in 18.876382148s
	I0229 02:32:02.742127  369508 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.742271  369508 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:32:02.744032  369508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:32:02.744292  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:32:02.744429  369508 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:32:02.744507  369508 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-915633"
	I0229 02:32:02.744526  369508 addons.go:69] Setting default-storageclass=true in profile "embed-certs-915633"
	I0229 02:32:02.744540  369508 addons.go:69] Setting metrics-server=true in profile "embed-certs-915633"
	I0229 02:32:02.744550  369508 addons.go:234] Setting addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:02.744555  369508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-915633"
	W0229 02:32:02.744558  369508 addons.go:243] addon metrics-server should already be in state true
	I0229 02:32:02.744619  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744532  369508 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-915633"
	W0229 02:32:02.744735  369508 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:32:02.744853  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.744682  369508 config.go:182] Loaded profile config "embed-certs-915633": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:32:02.745085  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745113  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745121  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745175  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.745339  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.745416  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.749865  369508 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-915633" context rescaled to 1 replicas
	I0229 02:32:02.749907  369508 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.218 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:32:02.751823  369508 out.go:177] * Verifying Kubernetes components...
	I0229 02:32:02.753296  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:32:02.762688  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0229 02:32:02.763050  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0229 02:32:02.763274  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763693  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.763872  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.763895  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.763963  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0229 02:32:02.764307  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.764337  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.764554  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.764592  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.764665  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.765103  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765135  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765144  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.765170  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.765481  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.765495  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.765863  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.766129  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.769253  369508 addons.go:234] Setting addon default-storageclass=true in "embed-certs-915633"
	W0229 02:32:02.769274  369508 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:32:02.769295  369508 host.go:66] Checking if "embed-certs-915633" exists ...
	I0229 02:32:02.769578  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.769607  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.787345  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35577
	I0229 02:32:02.787806  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.788243  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.788266  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.789755  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0229 02:32:02.790272  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.790361  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0229 02:32:02.790634  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.790727  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.791027  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.791192  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791206  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791367  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.791402  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.791705  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.791924  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.792315  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.792987  369508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:32:02.793026  369508 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:32:02.793278  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.795128  369508 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:32:02.794105  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.796451  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:32:02.796472  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:32:02.796496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.797812  369508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:31:59.493919  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.989683  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:02.799249  369508 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.799270  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:32:02.799289  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.800109  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.800960  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.801015  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.801300  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.801496  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.801635  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.801763  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.802278  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802617  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.802645  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.802836  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.803026  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.803174  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.803390  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.818656  369508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0229 02:32:02.819105  369508 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:32:02.819606  369508 main.go:141] libmachine: Using API Version  1
	I0229 02:32:02.819625  369508 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:32:02.820022  369508 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:32:02.820366  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetState
	I0229 02:32:02.822054  369508 main.go:141] libmachine: (embed-certs-915633) Calling .DriverName
	I0229 02:32:02.822412  369508 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:02.822432  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:32:02.822451  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHHostname
	I0229 02:32:02.825579  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826260  369508 main.go:141] libmachine: (embed-certs-915633) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:ca:ce", ip: ""} in network mk-embed-certs-915633: {Iface:virbr1 ExpiryTime:2024-02-29 03:31:29 +0000 UTC Type:0 Mac:52:54:00:26:ca:ce Iaid: IPaddr:192.168.50.218 Prefix:24 Hostname:embed-certs-915633 Clientid:01:52:54:00:26:ca:ce}
	I0229 02:32:02.826293  369508 main.go:141] libmachine: (embed-certs-915633) DBG | domain embed-certs-915633 has defined IP address 192.168.50.218 and MAC address 52:54:00:26:ca:ce in network mk-embed-certs-915633
	I0229 02:32:02.826463  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHPort
	I0229 02:32:02.826614  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHKeyPath
	I0229 02:32:02.826761  369508 main.go:141] libmachine: (embed-certs-915633) Calling .GetSSHUsername
	I0229 02:32:02.826954  369508 sshutil.go:53] new ssh client: &{IP:192.168.50.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/embed-certs-915633/id_rsa Username:docker}
	I0229 02:32:02.911316  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:32:02.945655  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:32:02.945683  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:32:02.981318  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:32:02.981352  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:32:02.983632  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:32:03.009561  369508 node_ready.go:35] waiting up to 6m0s for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:03.009586  369508 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 02:32:03.044265  369508 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:03.044293  369508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:32:03.094073  369508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:32:04.287008  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.3033415s)
	I0229 02:32:04.287081  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287094  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287375  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37602435s)
	I0229 02:32:04.287416  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287428  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287440  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287463  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287478  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287487  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287750  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287800  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287828  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.287861  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.287805  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.287914  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.287834  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.287774  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289370  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.289377  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.289397  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.293892  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.293919  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.294180  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.294198  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.294212  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.376595  369508 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.28244915s)
	I0229 02:32:04.376679  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.376710  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377004  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377022  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377031  369508 main.go:141] libmachine: Making call to close driver server
	I0229 02:32:04.377039  369508 main.go:141] libmachine: (embed-certs-915633) Calling .Close
	I0229 02:32:04.377037  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377275  369508 main.go:141] libmachine: (embed-certs-915633) DBG | Closing plugin on server side
	I0229 02:32:04.377319  369508 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:32:04.377331  369508 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:32:04.377348  369508 addons.go:470] Verifying addon metrics-server=true in "embed-certs-915633"
	I0229 02:32:04.380194  369508 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0229 02:32:04.381510  369508 addons.go:505] enable addons completed in 1.637082823s: enabled=[storage-provisioner default-storageclass metrics-server]
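Each addon is staged by copying its manifests into /etc/kubernetes/addons over SSH and then applied in a single kubectl call against the in-guest kubeconfig, as the "Run:" lines above record. A local sketch of that apply step (in the test this goes through the SSH runner, not a local exec):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // sudo accepts leading VAR=value assignments, which is how the logged
        // command injects KUBECONFIG for the root-owned kubectl invocation.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.28.4/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("apply failed: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }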
	I0229 02:32:02.010578  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:04.509975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:01.216197  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:01.716302  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.216170  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:02.715615  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.216580  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.716088  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.215743  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:04.716142  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.216543  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:05.715853  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:03.991440  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.992389  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:08.491225  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:05.014879  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.518854  369508 node_ready.go:58] node "embed-certs-915633" has status "Ready":"False"
	I0229 02:32:07.009085  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:09.009296  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:06.216206  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:06.715748  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.215964  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:07.716419  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.216034  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:08.715611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.216207  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:09.716408  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.216144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.716454  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:10.491751  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:12.991326  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:10.013574  369508 node_ready.go:49] node "embed-certs-915633" has status "Ready":"True"
	I0229 02:32:10.013605  369508 node_ready.go:38] duration metric: took 7.004009102s waiting for node "embed-certs-915633" to be "Ready" ...
	I0229 02:32:10.013617  369508 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:32:10.020332  369508 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025740  369508 pod_ready.go:92] pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.025766  369508 pod_ready.go:81] duration metric: took 5.403764ms waiting for pod "coredns-5dd5756b68-kt28m" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.025778  369508 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534182  369508 pod_ready.go:92] pod "etcd-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:10.534212  369508 pod_ready.go:81] duration metric: took 508.426559ms waiting for pod "etcd-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:10.534238  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.048997  369508 pod_ready.go:92] pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:11.049027  369508 pod_ready.go:81] duration metric: took 514.780048ms waiting for pod "kube-apiserver-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:11.049040  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:13.056477  369508 pod_ready.go:102] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.010305  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:13.011477  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:11.215611  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:11.716198  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.216332  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:12.716413  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.216407  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:13.716466  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.216182  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:14.716285  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.215995  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.715613  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:15.491511  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:17.494485  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:15.056064  369508 pod_ready.go:92] pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.056093  369508 pod_ready.go:81] duration metric: took 4.007044542s waiting for pod "kube-controller-manager-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.056104  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061418  369508 pod_ready.go:92] pod "kube-proxy-6tt7l" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.061440  369508 pod_ready.go:81] duration metric: took 5.329971ms waiting for pod "kube-proxy-6tt7l" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.061451  369508 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578305  369508 pod_ready.go:92] pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace has status "Ready":"True"
	I0229 02:32:15.578332  369508 pod_ready.go:81] duration metric: took 516.873281ms waiting for pod "kube-scheduler-embed-certs-915633" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:15.578341  369508 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	I0229 02:32:17.585624  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:19.586470  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:15.510630  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:18.010381  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:16.215530  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:16.716420  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.216031  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:17.716303  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.216082  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:18.715523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.216166  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.716503  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.215680  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:20.715770  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:19.989766  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.989821  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.586820  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:20.509895  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:23.010371  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:21.215523  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:21.715617  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.216133  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:22.716029  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.216141  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.715578  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.215640  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:24.715601  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.215959  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:25.716394  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:23.990493  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.990911  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.489681  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.085933  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:28.086754  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:25.508765  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:27.508956  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:29.512409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:26.215946  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:26.715834  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.216243  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:27.715581  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.215521  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:28.715849  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.215560  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:29.716497  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.215657  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.715492  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:30.490400  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:32.990250  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:30.586107  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:33.086852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.518170  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:34.009514  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:31.216322  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:31.716160  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.215557  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:32.715618  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.215761  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:33.716216  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.216460  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.716244  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.215551  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:35.715633  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:34.990305  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.990956  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:35.585472  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:37.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.509112  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:38.509634  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:36.215910  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:36.716307  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:37.216308  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:37.216404  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:37.262324  370051 cri.go:89] found id: ""
	I0229 02:32:37.262358  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.262370  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:37.262378  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:37.262442  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:37.303758  370051 cri.go:89] found id: ""
	I0229 02:32:37.303790  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.303802  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:37.303809  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:37.303880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:37.349512  370051 cri.go:89] found id: ""
	I0229 02:32:37.349538  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.349546  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:37.349553  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:37.349607  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:37.389630  370051 cri.go:89] found id: ""
	I0229 02:32:37.389657  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.389668  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:37.389676  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:37.389752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:37.435918  370051 cri.go:89] found id: ""
	I0229 02:32:37.435954  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.435967  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:37.435976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:37.436044  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:37.479336  370051 cri.go:89] found id: ""
	I0229 02:32:37.479369  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.479377  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:37.479384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:37.479460  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:37.519944  370051 cri.go:89] found id: ""
	I0229 02:32:37.519979  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.519991  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:37.519999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:37.520071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:37.563848  370051 cri.go:89] found id: ""
	I0229 02:32:37.563875  370051 logs.go:276] 0 containers: []
	W0229 02:32:37.563884  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:37.563895  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:37.563915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:37.607989  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:37.608025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:37.660272  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:37.660324  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:37.676878  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:37.676909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:37.805099  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:37.805132  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:37.805159  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
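The sweep above is the diagnostic fallback once the apiserver process check keeps failing: query CRI-O for each expected control-plane container (running or exited), then dump kubelet, dmesg, describe-nodes, and CRI-O logs. A sketch of the container-query half, with the component names taken from the log:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // --quiet prints only container IDs; -a includes exited containers.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                log.Printf("crictl failed for %q: %v", name, err)
                continue
            }
            if len(strings.Fields(string(out))) == 0 {
                log.Printf("No container was found matching %q", name)
            }
        }
    }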
	I0229 02:32:40.378467  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:40.393066  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:40.393221  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:40.432592  370051 cri.go:89] found id: ""
	I0229 02:32:40.432619  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.432628  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:40.432634  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:40.432693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:40.473651  370051 cri.go:89] found id: ""
	I0229 02:32:40.473706  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.473716  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:40.473722  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:40.473781  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:40.520262  370051 cri.go:89] found id: ""
	I0229 02:32:40.520292  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.520303  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:40.520312  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:40.520374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:40.560359  370051 cri.go:89] found id: ""
	I0229 02:32:40.560393  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.560402  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:40.560408  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:40.560474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:40.602145  370051 cri.go:89] found id: ""
	I0229 02:32:40.602173  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.602181  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:40.602187  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:40.602266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:40.640744  370051 cri.go:89] found id: ""
	I0229 02:32:40.640778  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.640791  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:40.640799  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:40.640869  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:40.681863  370051 cri.go:89] found id: ""
	I0229 02:32:40.681895  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.681908  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:40.681916  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:40.681985  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:40.725859  370051 cri.go:89] found id: ""
	I0229 02:32:40.725890  370051 logs.go:276] 0 containers: []
	W0229 02:32:40.725899  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:40.725910  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:40.725924  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:40.794666  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:40.794705  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:40.854173  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:40.854215  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:40.901744  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:40.901786  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:40.925331  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:40.925371  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:41.005785  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
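Note: the block above is one pass of minikube's log-gathering loop, which repeats every few seconds below while the runner waits for a control plane that never comes up. The same checks can be reproduced by hand on the node (e.g. over `minikube ssh`); this is a sketch assembled from the exact commands the runner logs, not a separate tool:

    # Per-component CRI listing; every such query in this report returns no IDs.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$c"
    done

    # Log gathering: CRI-O and kubelet journals, then recent kernel warnings.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

    # The step that fails: with no kube-apiserver container running, nothing
    # serves localhost:8443, so "describe nodes" exits 1 with connection refused.
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig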
	I0229 02:32:39.491292  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.494077  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:40.086540  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:42.584644  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:44.587012  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:41.010764  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.510128  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:43.506756  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:43.522038  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:43.522135  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:43.559609  370051 cri.go:89] found id: ""
	I0229 02:32:43.559635  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.559642  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:43.559649  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:43.559707  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:43.609059  370051 cri.go:89] found id: ""
	I0229 02:32:43.609087  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.609096  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:43.609102  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:43.609159  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:43.648988  370051 cri.go:89] found id: ""
	I0229 02:32:43.649018  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.649029  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:43.649037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:43.649104  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:43.690995  370051 cri.go:89] found id: ""
	I0229 02:32:43.691028  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.691042  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:43.691054  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:43.691120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:43.729221  370051 cri.go:89] found id: ""
	I0229 02:32:43.729249  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.729257  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:43.729263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:43.729334  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:43.767141  370051 cri.go:89] found id: ""
	I0229 02:32:43.767174  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.767186  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:43.767194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:43.767266  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:43.807926  370051 cri.go:89] found id: ""
	I0229 02:32:43.807962  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.807970  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:43.807976  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:43.808029  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:43.857945  370051 cri.go:89] found id: ""
	I0229 02:32:43.857973  370051 logs.go:276] 0 containers: []
	W0229 02:32:43.857981  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:43.857991  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:43.858005  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:43.941290  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:43.941338  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:43.986788  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:43.986823  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:44.037384  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:44.037421  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:44.052668  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:44.052696  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:44.127124  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:43.990179  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.990921  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.991525  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:47.086821  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:49.585987  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:45.510273  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:48.009067  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:50.011776  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:46.627409  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:46.642306  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:46.642397  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:46.685358  370051 cri.go:89] found id: ""
	I0229 02:32:46.685389  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.685400  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:46.685431  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:46.685493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:46.724996  370051 cri.go:89] found id: ""
	I0229 02:32:46.725026  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.725035  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:46.725041  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:46.725113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:46.765815  370051 cri.go:89] found id: ""
	I0229 02:32:46.765849  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.765857  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:46.765863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:46.765924  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:46.808946  370051 cri.go:89] found id: ""
	I0229 02:32:46.808980  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.808991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:46.809000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:46.809068  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:46.865068  370051 cri.go:89] found id: ""
	I0229 02:32:46.865106  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.865119  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:46.865127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:46.865200  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:46.932233  370051 cri.go:89] found id: ""
	I0229 02:32:46.932260  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.932268  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:46.932275  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:46.932331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:46.985701  370051 cri.go:89] found id: ""
	I0229 02:32:46.985732  370051 logs.go:276] 0 containers: []
	W0229 02:32:46.985744  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:46.985752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:46.985819  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:47.027497  370051 cri.go:89] found id: ""
	I0229 02:32:47.027524  370051 logs.go:276] 0 containers: []
	W0229 02:32:47.027536  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:47.027548  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:47.027565  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:47.075955  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:47.075990  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:47.093922  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:47.093949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:47.165000  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:47.165029  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:47.165046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:47.250161  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:47.250201  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
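A note on the container-status command just above: the backtick substitution resolves to crictl's full path when `which` finds one, and to the bare name `crictl` otherwise; if that invocation fails entirely, the trailing `|| sudo docker ps -a` arm covers Docker-based runtimes. The fallback chain, exactly as logged:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a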
	I0229 02:32:49.794654  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:49.809706  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:49.809787  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:49.868163  370051 cri.go:89] found id: ""
	I0229 02:32:49.868197  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.868217  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:49.868223  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:49.868277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:49.928462  370051 cri.go:89] found id: ""
	I0229 02:32:49.928495  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.928508  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:49.928516  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:49.928580  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:49.975725  370051 cri.go:89] found id: ""
	I0229 02:32:49.975755  370051 logs.go:276] 0 containers: []
	W0229 02:32:49.975765  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:49.975774  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:49.975849  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:50.017007  370051 cri.go:89] found id: ""
	I0229 02:32:50.017036  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.017046  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:50.017051  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:50.017118  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:50.054522  370051 cri.go:89] found id: ""
	I0229 02:32:50.054551  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.054560  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:50.054566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:50.054620  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:50.096274  370051 cri.go:89] found id: ""
	I0229 02:32:50.096300  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.096308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:50.096319  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:50.096382  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:50.142543  370051 cri.go:89] found id: ""
	I0229 02:32:50.142581  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.142590  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:50.142597  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:50.142667  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:50.182452  370051 cri.go:89] found id: ""
	I0229 02:32:50.182482  370051 logs.go:276] 0 containers: []
	W0229 02:32:50.182492  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:50.182505  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:50.182522  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:50.266311  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:50.266355  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:50.309277  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:50.309322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:50.360492  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:50.360536  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:50.376711  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:50.376744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:50.447128  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:49.992032  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.490801  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:51.586053  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:53.586268  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:52.510054  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:54.510975  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
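The interleaved pod_ready lines come from three other profiles running in parallel (log prefixes 369591, 369508, and 369869), each polling the Ready condition of its metrics-server pod. An equivalent one-off check, with the pod name taken from the log and the kubeconfig/context left to the caller:

    # Prints "False" until the pod reports Ready, matching the lines above.
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-zghwq \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'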
	I0229 02:32:52.947926  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:52.970209  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:52.970317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:53.010840  370051 cri.go:89] found id: ""
	I0229 02:32:53.010868  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.010878  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:53.010886  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:53.010983  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:53.049458  370051 cri.go:89] found id: ""
	I0229 02:32:53.049490  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.049503  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:53.049511  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:53.049578  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:53.088615  370051 cri.go:89] found id: ""
	I0229 02:32:53.088646  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.088656  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:53.088671  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:53.088738  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:53.130176  370051 cri.go:89] found id: ""
	I0229 02:32:53.130210  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.130237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:53.130247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:53.130317  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:53.177876  370051 cri.go:89] found id: ""
	I0229 02:32:53.177908  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.177920  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:53.177928  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:53.177991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:53.216036  370051 cri.go:89] found id: ""
	I0229 02:32:53.216065  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.216074  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:53.216080  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:53.216143  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:53.254673  370051 cri.go:89] found id: ""
	I0229 02:32:53.254705  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.254716  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:53.254724  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:53.254785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:53.291508  370051 cri.go:89] found id: ""
	I0229 02:32:53.291539  370051 logs.go:276] 0 containers: []
	W0229 02:32:53.291551  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:53.291564  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:53.291581  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:53.343312  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:53.343354  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:53.359264  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:53.359294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:53.431396  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:53.431428  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:53.431445  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:53.512494  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:53.512529  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:56.057340  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:56.073074  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:56.073158  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:56.111650  370051 cri.go:89] found id: ""
	I0229 02:32:56.111684  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.111704  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:56.111713  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:56.111785  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:54.990490  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.991005  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:55.587290  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:58.086312  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:57.008288  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:59.011396  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:32:56.150147  370051 cri.go:89] found id: ""
	I0229 02:32:56.150178  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.150191  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:56.150200  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:56.150280  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:56.192842  370051 cri.go:89] found id: ""
	I0229 02:32:56.192878  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.192890  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:56.192898  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:56.192969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:56.232013  370051 cri.go:89] found id: ""
	I0229 02:32:56.232051  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.232062  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:56.232079  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:56.232151  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:56.273824  370051 cri.go:89] found id: ""
	I0229 02:32:56.273858  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.273871  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:56.273882  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:56.273949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:56.312112  370051 cri.go:89] found id: ""
	I0229 02:32:56.312139  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.312147  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:56.312153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:56.312203  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:56.352558  370051 cri.go:89] found id: ""
	I0229 02:32:56.352585  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.352593  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:56.352600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:56.352666  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:56.397719  370051 cri.go:89] found id: ""
	I0229 02:32:56.397762  370051 logs.go:276] 0 containers: []
	W0229 02:32:56.397775  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:56.397790  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:56.397808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:56.447793  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:56.447831  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:56.463859  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:56.463894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:56.540306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:56.540333  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:56.540347  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:56.633201  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:56.633247  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.207459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:32:59.222165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:32:59.222271  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:32:59.261197  370051 cri.go:89] found id: ""
	I0229 02:32:59.261230  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.261242  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:32:59.261251  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:32:59.261338  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:32:59.300874  370051 cri.go:89] found id: ""
	I0229 02:32:59.300917  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.300940  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:32:59.300950  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:32:59.301025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:32:59.345399  370051 cri.go:89] found id: ""
	I0229 02:32:59.345435  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.345446  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:32:59.345455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:32:59.345525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:32:59.386068  370051 cri.go:89] found id: ""
	I0229 02:32:59.386102  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.386112  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:32:59.386132  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:32:59.386184  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:32:59.436597  370051 cri.go:89] found id: ""
	I0229 02:32:59.436629  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.436641  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:32:59.436649  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:32:59.436708  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:32:59.481417  370051 cri.go:89] found id: ""
	I0229 02:32:59.481446  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.481462  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:32:59.481469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:32:59.481535  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:32:59.527725  370051 cri.go:89] found id: ""
	I0229 02:32:59.527752  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.527763  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:32:59.527771  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:32:59.527845  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:32:59.574502  370051 cri.go:89] found id: ""
	I0229 02:32:59.574535  370051 logs.go:276] 0 containers: []
	W0229 02:32:59.574547  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:32:59.574561  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:32:59.574579  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:32:59.669584  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:32:59.669630  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:32:59.730049  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:32:59.730096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:32:59.779562  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:32:59.779613  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:32:59.797016  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:32:59.797046  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:32:59.876438  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:32:58.991584  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.489321  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:03.489615  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:00.585463  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:02.587986  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.588479  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:01.509980  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:04.009579  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:02.377144  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:02.391585  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:02.391682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:02.432359  370051 cri.go:89] found id: ""
	I0229 02:33:02.432390  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.432399  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:02.432406  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:02.432462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:02.476733  370051 cri.go:89] found id: ""
	I0229 02:33:02.476768  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.476781  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:02.476790  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:02.476856  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:02.521414  370051 cri.go:89] found id: ""
	I0229 02:33:02.521440  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.521448  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:02.521454  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:02.521513  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:02.561663  370051 cri.go:89] found id: ""
	I0229 02:33:02.561690  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.561698  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:02.561704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:02.561755  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:02.611953  370051 cri.go:89] found id: ""
	I0229 02:33:02.611989  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.612002  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:02.612010  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:02.612079  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:02.663254  370051 cri.go:89] found id: ""
	I0229 02:33:02.663282  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.663290  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:02.663297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:02.663348  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:02.721449  370051 cri.go:89] found id: ""
	I0229 02:33:02.721484  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.721497  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:02.721506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:02.721579  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:02.761197  370051 cri.go:89] found id: ""
	I0229 02:33:02.761239  370051 logs.go:276] 0 containers: []
	W0229 02:33:02.761251  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:02.761265  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:02.761282  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:02.810457  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:02.810498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:02.828906  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:02.828940  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:02.911895  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:02.911932  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:02.911945  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:02.995120  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:02.995152  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:05.544629  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:05.559266  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:05.559342  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:05.609673  370051 cri.go:89] found id: ""
	I0229 02:33:05.609706  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.609718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:05.609727  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:05.609795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:05.665161  370051 cri.go:89] found id: ""
	I0229 02:33:05.665192  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.665203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:05.665211  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:05.665282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:05.719923  370051 cri.go:89] found id: ""
	I0229 02:33:05.719949  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.719957  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:05.719963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:05.720025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:05.765189  370051 cri.go:89] found id: ""
	I0229 02:33:05.765224  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.765237  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:05.765245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:05.765357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:05.803788  370051 cri.go:89] found id: ""
	I0229 02:33:05.803820  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.803829  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:05.803836  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:05.803909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:05.842152  370051 cri.go:89] found id: ""
	I0229 02:33:05.842178  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.842188  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:05.842197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:05.842278  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:05.885042  370051 cri.go:89] found id: ""
	I0229 02:33:05.885071  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.885084  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:05.885092  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:05.885156  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:05.926032  370051 cri.go:89] found id: ""
	I0229 02:33:05.926069  370051 logs.go:276] 0 containers: []
	W0229 02:33:05.926082  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:05.926096  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:05.926112  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:06.014702  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:06.014744  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:06.063510  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:06.063550  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:06.114215  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:06.114272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:06.130132  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:06.130169  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:05.490726  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.491068  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:07.085225  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:09.087524  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:06.508469  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:08.509399  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:06.205692  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:08.706549  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:08.722548  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:08.722614  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:08.768518  370051 cri.go:89] found id: ""
	I0229 02:33:08.768553  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.768564  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:08.768572  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:08.768630  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:08.804600  370051 cri.go:89] found id: ""
	I0229 02:33:08.804630  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.804643  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:08.804651  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:08.804721  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:08.842466  370051 cri.go:89] found id: ""
	I0229 02:33:08.842497  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.842510  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:08.842518  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:08.842589  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:08.878384  370051 cri.go:89] found id: ""
	I0229 02:33:08.878412  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.878421  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:08.878427  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:08.878484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:08.924228  370051 cri.go:89] found id: ""
	I0229 02:33:08.924262  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.924275  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:08.924295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:08.924374  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:08.966122  370051 cri.go:89] found id: ""
	I0229 02:33:08.966157  370051 logs.go:276] 0 containers: []
	W0229 02:33:08.966168  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:08.966177  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:08.966254  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:09.011109  370051 cri.go:89] found id: ""
	I0229 02:33:09.011135  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.011144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:09.011152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:09.011217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:09.059716  370051 cri.go:89] found id: ""
	I0229 02:33:09.059749  370051 logs.go:276] 0 containers: []
	W0229 02:33:09.059782  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:09.059795  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:09.059812  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:09.110564  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:09.110599  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:09.126037  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:09.126065  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:09.199827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:09.199858  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:09.199892  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:09.282624  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:09.282661  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:09.990502  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.991783  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.586475  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:13.586740  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:10.511051  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:12.512644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:15.009478  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:11.829017  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:11.842826  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:11.842894  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:11.881652  370051 cri.go:89] found id: ""
	I0229 02:33:11.881689  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.881700  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:11.881709  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:11.881773  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:11.919252  370051 cri.go:89] found id: ""
	I0229 02:33:11.919291  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.919302  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:11.919309  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:11.919380  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:11.959145  370051 cri.go:89] found id: ""
	I0229 02:33:11.959175  370051 logs.go:276] 0 containers: []
	W0229 02:33:11.959187  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:11.959196  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:11.959263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:12.002105  370051 cri.go:89] found id: ""
	I0229 02:33:12.002134  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.002145  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:12.002153  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:12.002219  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:12.042157  370051 cri.go:89] found id: ""
	I0229 02:33:12.042188  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.042221  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:12.042249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:12.042326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:12.080121  370051 cri.go:89] found id: ""
	I0229 02:33:12.080150  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.080158  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:12.080165  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:12.080231  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:12.119259  370051 cri.go:89] found id: ""
	I0229 02:33:12.119286  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.119294  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:12.119301  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:12.119357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:12.160136  370051 cri.go:89] found id: ""
	I0229 02:33:12.160171  370051 logs.go:276] 0 containers: []
	W0229 02:33:12.160182  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:12.160195  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:12.160209  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:12.209770  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:12.209810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:12.226429  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:12.226460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:12.295933  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:12.295966  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:12.295978  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:12.380794  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:12.380843  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
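The block ending here is one full iteration of minikube's log-gathering loop for process 370051 (a cluster pinned to Kubernetes v1.16.0, per the binary path): pgrep finds no kube-apiserver process, each crictl probe for a control-plane container returns an empty id list, and the loop then collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying. The probe sequence can be replayed by hand inside the VM; this sketch only reuses commands that appear verbatim in the entries above:

    # empty output means the apiserver container does not exist at all
    sudo crictl ps -a --quiet --name=kube-apiserver
    # the last 400 kubelet journal lines, same window the test collects
    sudo journalctl -u kubelet -n 400
    # the step that keeps failing while the apiserver is down
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig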
	I0229 02:33:14.971692  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:14.986085  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:14.986162  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:15.024756  370051 cri.go:89] found id: ""
	I0229 02:33:15.024788  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.024801  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:15.024809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:15.024868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:15.065131  370051 cri.go:89] found id: ""
	I0229 02:33:15.065159  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.065172  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:15.065180  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:15.065251  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:15.104744  370051 cri.go:89] found id: ""
	I0229 02:33:15.104775  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.104786  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:15.104794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:15.104858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:15.145710  370051 cri.go:89] found id: ""
	I0229 02:33:15.145737  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.145745  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:15.145752  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:15.145803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:15.184908  370051 cri.go:89] found id: ""
	I0229 02:33:15.184933  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.184942  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:15.184951  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:15.185016  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:15.230195  370051 cri.go:89] found id: ""
	I0229 02:33:15.230220  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.230241  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:15.230249  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:15.230326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:15.269750  370051 cri.go:89] found id: ""
	I0229 02:33:15.269774  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.269783  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:15.269789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:15.269852  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:15.312331  370051 cri.go:89] found id: ""
	I0229 02:33:15.312360  370051 logs.go:276] 0 containers: []
	W0229 02:33:15.312373  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:15.312387  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:15.312402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:15.363032  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:15.363067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:15.422421  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:15.422463  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:15.445235  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:15.445272  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:15.530010  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:15.530047  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:15.530066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:14.489188  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.991028  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:16.090733  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.587045  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:17.510766  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:20.009379  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:18.116265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:18.130375  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:18.130439  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:18.167740  370051 cri.go:89] found id: ""
	I0229 02:33:18.167767  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.167776  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:18.167782  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:18.167843  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:18.205621  370051 cri.go:89] found id: ""
	I0229 02:33:18.205653  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.205662  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:18.205670  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:18.205725  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:18.246917  370051 cri.go:89] found id: ""
	I0229 02:33:18.246954  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.246975  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:18.246983  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:18.247040  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:18.285087  370051 cri.go:89] found id: ""
	I0229 02:33:18.285114  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.285123  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:18.285130  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:18.285181  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:18.323989  370051 cri.go:89] found id: ""
	I0229 02:33:18.324018  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.324027  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:18.324033  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:18.324094  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:18.372741  370051 cri.go:89] found id: ""
	I0229 02:33:18.372769  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.372779  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:18.372785  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:18.372838  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:18.432846  370051 cri.go:89] found id: ""
	I0229 02:33:18.432888  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.432900  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:18.432908  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:18.432977  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:18.486357  370051 cri.go:89] found id: ""
	I0229 02:33:18.486387  370051 logs.go:276] 0 containers: []
	W0229 02:33:18.486399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:18.486411  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:18.486431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:18.532363  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:18.532402  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:18.582035  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:18.582076  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:18.599009  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:18.599050  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:18.673580  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:18.673609  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:18.673625  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:19.490704  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.990251  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:21.085541  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:23.086148  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:22.009826  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:24.509388  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
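Interleaved with the 370051 loop, three other test processes (369591, 369508, 369869) are each polling their cluster's metrics-server pod; pod_ready.go:102 re-checks the pod's Ready condition until it flips to True or the test times out. A one-shot equivalent with kubectl would look like the sketch below; the k8s-app=metrics-server label selector is an assumption based on minikube's metrics-server addon manifest, not something shown in this log:

    # block until the metrics-server pod reports Ready=True (or 60s elapse)
    # NOTE: the label selector is assumed, not taken from the log above
    kubectl --namespace kube-system wait pod \
      --selector k8s-app=metrics-server \
      --for=condition=Ready --timeout=60s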
	I0229 02:33:21.259614  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:21.274150  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:21.274250  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:21.311859  370051 cri.go:89] found id: ""
	I0229 02:33:21.311895  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.311908  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:21.311917  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:21.311984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:21.364260  370051 cri.go:89] found id: ""
	I0229 02:33:21.364296  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.364309  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:21.364317  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:21.364391  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:21.424181  370051 cri.go:89] found id: ""
	I0229 02:33:21.424217  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.424229  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:21.424237  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:21.424306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:21.482499  370051 cri.go:89] found id: ""
	I0229 02:33:21.482531  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.482543  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:21.482551  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:21.482621  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:21.523743  370051 cri.go:89] found id: ""
	I0229 02:33:21.523775  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.523785  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:21.523793  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:21.523868  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:21.563759  370051 cri.go:89] found id: ""
	I0229 02:33:21.563789  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.563800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:21.563809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:21.563889  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:21.610162  370051 cri.go:89] found id: ""
	I0229 02:33:21.610265  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.610286  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:21.610295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:21.610378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:21.652001  370051 cri.go:89] found id: ""
	I0229 02:33:21.652028  370051 logs.go:276] 0 containers: []
	W0229 02:33:21.652037  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:21.652047  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:21.652060  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:21.704028  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:21.704067  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:21.720924  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:21.720956  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:21.798619  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:21.798645  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:21.798664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:21.888445  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:21.888506  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.437647  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:24.459963  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:24.460041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:24.503906  370051 cri.go:89] found id: ""
	I0229 02:33:24.503940  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.503950  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:24.503956  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:24.504031  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:24.541893  370051 cri.go:89] found id: ""
	I0229 02:33:24.541919  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.541929  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:24.541935  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:24.541991  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:24.584717  370051 cri.go:89] found id: ""
	I0229 02:33:24.584748  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.584760  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:24.584769  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:24.584836  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:24.623334  370051 cri.go:89] found id: ""
	I0229 02:33:24.623362  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.623371  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:24.623378  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:24.623447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:24.665862  370051 cri.go:89] found id: ""
	I0229 02:33:24.665890  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.665902  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:24.665911  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:24.665984  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:24.705509  370051 cri.go:89] found id: ""
	I0229 02:33:24.705540  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.705551  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:24.705560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:24.705634  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:24.745348  370051 cri.go:89] found id: ""
	I0229 02:33:24.745389  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.745399  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:24.745406  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:24.745462  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:24.785490  370051 cri.go:89] found id: ""
	I0229 02:33:24.785520  370051 logs.go:276] 0 containers: []
	W0229 02:33:24.785529  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:24.785539  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:24.785553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:24.829556  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:24.829589  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:24.877914  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:24.877949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:24.894590  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:24.894623  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:24.972948  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:24.972981  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:24.972997  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:23.990806  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.489823  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:25.586684  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.588321  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:26.509932  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:29.010692  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:27.555364  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:27.570747  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:27.570820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:27.609771  370051 cri.go:89] found id: ""
	I0229 02:33:27.609800  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.609807  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:27.609813  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:27.609863  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:27.654316  370051 cri.go:89] found id: ""
	I0229 02:33:27.654347  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.654360  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:27.654376  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:27.654453  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:27.695089  370051 cri.go:89] found id: ""
	I0229 02:33:27.695125  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.695137  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:27.695143  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:27.695199  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:27.733846  370051 cri.go:89] found id: ""
	I0229 02:33:27.733881  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.733893  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:27.733901  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:27.733972  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:27.772906  370051 cri.go:89] found id: ""
	I0229 02:33:27.772940  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.772953  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:27.772961  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:27.773039  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:27.812266  370051 cri.go:89] found id: ""
	I0229 02:33:27.812295  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.812308  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:27.812316  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:27.812387  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:27.849272  370051 cri.go:89] found id: ""
	I0229 02:33:27.849305  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.849316  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:27.849324  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:27.849393  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:27.887495  370051 cri.go:89] found id: ""
	I0229 02:33:27.887528  370051 logs.go:276] 0 containers: []
	W0229 02:33:27.887541  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:27.887554  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:27.887569  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:27.972220  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:27.972261  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:28.020757  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:28.020797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:28.070347  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:28.070381  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:28.089905  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:28.089947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:28.183306  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
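Every describe-nodes attempt in this section fails the same way: the connection to localhost:8443 is refused because no apiserver container exists to listen there. Two quick checks can confirm the port is unbound rather than blocked; these are suggested follow-ups run inside the VM, not commands the test itself executes:

    # no output: nothing is listening on the apiserver port
    sudo ss -tlnp | grep ':8443'
    # once an apiserver is up, its health endpoint answers here
    curl -k https://localhost:8443/healthz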
	I0229 02:33:30.683857  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:30.701341  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:30.701443  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:30.741342  370051 cri.go:89] found id: ""
	I0229 02:33:30.741376  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.741387  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:30.741397  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:30.741475  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:30.785372  370051 cri.go:89] found id: ""
	I0229 02:33:30.785415  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.785427  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:30.785435  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:30.785506  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:30.828402  370051 cri.go:89] found id: ""
	I0229 02:33:30.828428  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.828436  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:30.828442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:30.828504  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:30.872656  370051 cri.go:89] found id: ""
	I0229 02:33:30.872684  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.872695  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:30.872704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:30.872770  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:30.918746  370051 cri.go:89] found id: ""
	I0229 02:33:30.918775  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.918786  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:30.918794  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:30.918867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:30.956794  370051 cri.go:89] found id: ""
	I0229 02:33:30.956838  370051 logs.go:276] 0 containers: []
	W0229 02:33:30.956852  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:30.956860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:30.956935  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:31.000595  370051 cri.go:89] found id: ""
	I0229 02:33:31.000618  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.000628  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:31.000637  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:31.000699  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:31.039060  370051 cri.go:89] found id: ""
	I0229 02:33:31.039089  370051 logs.go:276] 0 containers: []
	W0229 02:33:31.039100  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:31.039111  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:31.039133  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:31.089919  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:31.089949  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:31.110276  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:31.110315  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:33:28.990807  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.993882  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.489703  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:30.086658  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:32.586407  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:34.588272  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:31.509534  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:33.511710  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	W0229 02:33:31.235760  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:31.235791  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:31.235810  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:31.323257  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:31.323322  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:33.872956  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:33.887953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:33.888034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:33.927887  370051 cri.go:89] found id: ""
	I0229 02:33:33.927926  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.927938  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:33.927945  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:33.928001  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:33.967301  370051 cri.go:89] found id: ""
	I0229 02:33:33.967333  370051 logs.go:276] 0 containers: []
	W0229 02:33:33.967345  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:33.967356  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:33.967425  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:34.009949  370051 cri.go:89] found id: ""
	I0229 02:33:34.009982  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.009992  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:34.009999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:34.010073  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:34.056197  370051 cri.go:89] found id: ""
	I0229 02:33:34.056224  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.056232  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:34.056239  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:34.056314  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:34.107089  370051 cri.go:89] found id: ""
	I0229 02:33:34.107120  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.107132  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:34.107140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:34.107206  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:34.162822  370051 cri.go:89] found id: ""
	I0229 02:33:34.162856  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.162875  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:34.162884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:34.162961  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:34.209963  370051 cri.go:89] found id: ""
	I0229 02:33:34.209993  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.210001  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:34.210008  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:34.210078  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:34.250688  370051 cri.go:89] found id: ""
	I0229 02:33:34.250726  370051 logs.go:276] 0 containers: []
	W0229 02:33:34.250735  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:34.250754  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:34.250768  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:34.298953  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:34.298993  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:34.314067  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:34.314100  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:34.393515  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:34.393536  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:34.393551  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:34.477034  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:34.477078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:35.990175  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.490651  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.087261  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:39.588400  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:36.009933  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:38.508929  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:37.025152  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:37.040410  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:37.040491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:37.077922  370051 cri.go:89] found id: ""
	I0229 02:33:37.077953  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.077965  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:37.077973  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:37.078041  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:37.137895  370051 cri.go:89] found id: ""
	I0229 02:33:37.137925  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.137938  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:37.137946  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:37.138012  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:37.199291  370051 cri.go:89] found id: ""
	I0229 02:33:37.199324  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.199336  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:37.199344  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:37.199422  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:37.242817  370051 cri.go:89] found id: ""
	I0229 02:33:37.242848  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.242857  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:37.242863  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:37.242917  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:37.282171  370051 cri.go:89] found id: ""
	I0229 02:33:37.282196  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.282204  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:37.282211  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:37.282284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:37.328608  370051 cri.go:89] found id: ""
	I0229 02:33:37.328639  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.328647  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:37.328658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:37.328724  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:37.372965  370051 cri.go:89] found id: ""
	I0229 02:33:37.372996  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.373008  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:37.373016  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:37.373091  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:37.417597  370051 cri.go:89] found id: ""
	I0229 02:33:37.417630  370051 logs.go:276] 0 containers: []
	W0229 02:33:37.417642  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:37.417655  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:37.417673  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:37.472023  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:37.472058  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:37.487931  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:37.487961  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:37.568196  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:37.568227  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:37.568245  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:37.658485  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:37.658523  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.203039  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:40.220385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:40.220477  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:40.262962  370051 cri.go:89] found id: ""
	I0229 02:33:40.262993  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.263004  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:40.263016  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:40.263086  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:40.302452  370051 cri.go:89] found id: ""
	I0229 02:33:40.302483  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.302495  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:40.302503  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:40.302560  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:40.342509  370051 cri.go:89] found id: ""
	I0229 02:33:40.342544  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.342557  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:40.342566  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:40.342644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:40.385585  370051 cri.go:89] found id: ""
	I0229 02:33:40.385615  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.385629  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:40.385638  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:40.385703  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:40.426839  370051 cri.go:89] found id: ""
	I0229 02:33:40.426874  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.426887  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:40.426896  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:40.426962  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:40.467217  370051 cri.go:89] found id: ""
	I0229 02:33:40.467241  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.467251  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:40.467257  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:40.467332  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:40.513525  370051 cri.go:89] found id: ""
	I0229 02:33:40.513546  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.513553  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:40.513559  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:40.513609  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:40.554187  370051 cri.go:89] found id: ""
	I0229 02:33:40.554256  370051 logs.go:276] 0 containers: []
	W0229 02:33:40.554269  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:40.554282  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:40.554301  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:40.636447  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:40.636477  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:40.636494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:40.716381  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:40.716423  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:40.761946  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:40.761982  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:40.812828  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:40.812862  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:40.492178  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.991517  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.086413  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:44.586663  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:40.510266  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:42.510702  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:45.013362  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:43.336139  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:43.352278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:43.352361  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:43.392555  370051 cri.go:89] found id: ""
	I0229 02:33:43.392593  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.392607  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:43.392616  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:43.392689  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:43.438169  370051 cri.go:89] found id: ""
	I0229 02:33:43.438202  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.438216  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:43.438242  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:43.438331  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:43.476987  370051 cri.go:89] found id: ""
	I0229 02:33:43.477021  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.477033  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:43.477042  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:43.477109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:43.526728  370051 cri.go:89] found id: ""
	I0229 02:33:43.526758  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.526767  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:43.526778  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:43.526833  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:43.572222  370051 cri.go:89] found id: ""
	I0229 02:33:43.572260  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.572273  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:43.572282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:43.572372  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:43.618650  370051 cri.go:89] found id: ""
	I0229 02:33:43.618679  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.618691  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:43.618698  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:43.618764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:43.658069  370051 cri.go:89] found id: ""
	I0229 02:33:43.658104  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.658116  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:43.658126  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:43.658196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:43.700790  370051 cri.go:89] found id: ""
	I0229 02:33:43.700829  370051 logs.go:276] 0 containers: []
	W0229 02:33:43.700841  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:43.700855  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:43.700874  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:43.753330  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:43.753372  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:43.770261  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:43.770294  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:43.842407  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:43.842430  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:43.842447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:43.935427  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:43.935470  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
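The container-status command above is written defensively: the backtick substitution resolves crictl's full path when `which` finds it (falling back to the bare name otherwise), and the trailing `|| sudo docker ps -a` retries against Docker if the CRI-level listing fails. The pattern in isolation:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # equivalent, with clearer quoting:
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a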
	I0229 02:33:45.490296  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.490514  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.088903  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.585902  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:47.510105  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:49.511420  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:46.498694  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:46.516463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:46.516541  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:46.554731  370051 cri.go:89] found id: ""
	I0229 02:33:46.554757  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.554766  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:46.554772  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:46.554835  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:46.596851  370051 cri.go:89] found id: ""
	I0229 02:33:46.596892  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.596905  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:46.596912  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:46.596981  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:46.634978  370051 cri.go:89] found id: ""
	I0229 02:33:46.635008  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.635017  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:46.635024  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:46.635089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:46.675302  370051 cri.go:89] found id: ""
	I0229 02:33:46.675334  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.675347  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:46.675355  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:46.675423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:46.717366  370051 cri.go:89] found id: ""
	I0229 02:33:46.717402  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.717413  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:46.717421  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:46.717484  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:46.756130  370051 cri.go:89] found id: ""
	I0229 02:33:46.756160  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.756169  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:46.756176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:46.756228  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:46.794283  370051 cri.go:89] found id: ""
	I0229 02:33:46.794312  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.794320  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:46.794328  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:46.794384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:46.836646  370051 cri.go:89] found id: ""
	I0229 02:33:46.836679  370051 logs.go:276] 0 containers: []
	W0229 02:33:46.836691  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:46.836703  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:46.836721  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:46.926532  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:46.926578  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:46.981883  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:46.981915  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:47.033571  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:47.033612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:47.049803  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:47.049833  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:47.123389  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
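Each "connection to the server localhost:8443 was refused" block is the expected symptom when no kube-apiserver container exists (the crictl listings above all come back empty): nothing is listening on the apiserver port. Two quick checks that would confirm this on the node (a hedged sketch using standard tools, not part of the test itself):

    sudo ss -tlnp | grep 8443                  # no listener -> apiserver is down
    curl -sk https://localhost:8443/healthz    # connection refused in this state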
	I0229 02:33:49.623827  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:49.638175  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:49.638263  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:49.675895  370051 cri.go:89] found id: ""
	I0229 02:33:49.675929  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.675941  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:49.675950  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:49.676009  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:49.720679  370051 cri.go:89] found id: ""
	I0229 02:33:49.720718  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.720730  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:49.720739  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:49.720808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:49.762299  370051 cri.go:89] found id: ""
	I0229 02:33:49.762329  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.762342  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:49.762350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:49.762426  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:49.809330  370051 cri.go:89] found id: ""
	I0229 02:33:49.809364  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.809376  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:49.809391  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:49.809455  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:49.859176  370051 cri.go:89] found id: ""
	I0229 02:33:49.859206  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.859218  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:49.859226  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:49.859292  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:49.914844  370051 cri.go:89] found id: ""
	I0229 02:33:49.914877  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.914890  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:49.914897  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:49.914967  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:49.969640  370051 cri.go:89] found id: ""
	I0229 02:33:49.969667  370051 logs.go:276] 0 containers: []
	W0229 02:33:49.969676  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:49.969682  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:49.969736  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:50.010924  370051 cri.go:89] found id: ""
	I0229 02:33:50.010953  370051 logs.go:276] 0 containers: []
	W0229 02:33:50.010965  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:50.010976  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:50.011002  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:50.089462  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:50.089494  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:50.132098  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:50.132129  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:50.182693  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:50.182737  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:50.198209  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:50.198256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:50.281521  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
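The pgrep probe that opens each retry cycle uses -f to match against the full command line, -x to require the pattern to match that line exactly, and -n to return only the newest match; an empty result (exit status 1) is what keeps the loop polling. In isolation:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # exit 0 and a PID once the apiserver process appears; exit 1 while it is absent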
	I0229 02:33:49.991831  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.489891  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.586298  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:53.587249  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:51.513176  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:54.010209  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:52.781677  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:52.795962  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:52.796055  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:52.833670  370051 cri.go:89] found id: ""
	I0229 02:33:52.833706  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.833718  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:52.833728  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:52.833795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:52.889497  370051 cri.go:89] found id: ""
	I0229 02:33:52.889529  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.889539  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:52.889547  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:52.889616  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:52.952880  370051 cri.go:89] found id: ""
	I0229 02:33:52.952915  370051 logs.go:276] 0 containers: []
	W0229 02:33:52.952927  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:52.952935  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:52.953002  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:53.008380  370051 cri.go:89] found id: ""
	I0229 02:33:53.008409  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.008420  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:53.008434  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:53.008502  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:53.047877  370051 cri.go:89] found id: ""
	I0229 02:33:53.047911  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.047922  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:53.047931  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:53.047999  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:53.086080  370051 cri.go:89] found id: ""
	I0229 02:33:53.086107  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.086118  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:53.086127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:53.086193  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:53.128334  370051 cri.go:89] found id: ""
	I0229 02:33:53.128368  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.128378  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:53.128385  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:53.128457  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:53.172201  370051 cri.go:89] found id: ""
	I0229 02:33:53.172232  370051 logs.go:276] 0 containers: []
	W0229 02:33:53.172245  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:53.172258  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:53.172275  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:53.222608  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:53.222648  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:53.239888  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:53.239918  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:53.315827  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:53.315850  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:53.315864  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:53.395457  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:53.395498  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:55.943418  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:55.960562  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:55.960638  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:56.005181  370051 cri.go:89] found id: ""
	I0229 02:33:56.005210  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.005221  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:56.005229  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:56.005293  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:56.046700  370051 cri.go:89] found id: ""
	I0229 02:33:56.046731  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.046743  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:56.046750  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:56.046814  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:56.088459  370051 cri.go:89] found id: ""
	I0229 02:33:56.088486  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.088497  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:56.088505  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:56.088571  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:56.127729  370051 cri.go:89] found id: ""
	I0229 02:33:56.127762  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.127774  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:56.127783  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:56.127862  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:54.491536  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.493973  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.089188  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.586570  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.011539  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:58.509708  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:33:56.169980  370051 cri.go:89] found id: ""
	I0229 02:33:56.170011  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.170022  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:56.170030  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:56.170098  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:56.210650  370051 cri.go:89] found id: ""
	I0229 02:33:56.210682  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.210694  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:56.210704  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:56.210771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:56.247342  370051 cri.go:89] found id: ""
	I0229 02:33:56.247380  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.247391  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:56.247400  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:56.247474  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:56.286322  370051 cri.go:89] found id: ""
	I0229 02:33:56.286353  370051 logs.go:276] 0 containers: []
	W0229 02:33:56.286364  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:56.286375  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:56.286393  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:56.335144  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:56.335184  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:33:56.351322  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:56.351359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:56.424251  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:56.424282  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:56.424299  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:56.506053  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:56.506082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.052805  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:33:59.067508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:33:59.067599  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:33:59.114213  370051 cri.go:89] found id: ""
	I0229 02:33:59.114256  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.114268  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:33:59.114276  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:33:59.114327  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:33:59.161087  370051 cri.go:89] found id: ""
	I0229 02:33:59.161123  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.161136  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:33:59.161145  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:33:59.161217  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:33:59.206071  370051 cri.go:89] found id: ""
	I0229 02:33:59.206101  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.206114  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:33:59.206122  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:33:59.206196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:33:59.245152  370051 cri.go:89] found id: ""
	I0229 02:33:59.245179  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.245188  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:33:59.245194  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:33:59.245247  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:33:59.286047  370051 cri.go:89] found id: ""
	I0229 02:33:59.286080  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.286092  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:33:59.286101  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:33:59.286165  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:33:59.323171  370051 cri.go:89] found id: ""
	I0229 02:33:59.323203  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.323214  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:33:59.323222  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:33:59.323288  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:33:59.364434  370051 cri.go:89] found id: ""
	I0229 02:33:59.364464  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.364477  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:33:59.364485  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:33:59.364554  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:33:59.405902  370051 cri.go:89] found id: ""
	I0229 02:33:59.405929  370051 logs.go:276] 0 containers: []
	W0229 02:33:59.405938  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:33:59.405948  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:33:59.405980  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:33:59.481810  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:33:59.481841  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:33:59.481858  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:33:59.575726  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:33:59.575767  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:33:59.634808  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:33:59.634849  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:33:59.702513  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:33:59.702552  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
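Each per-component probe above is a crictl listing: -a includes exited containers, --quiet prints bare container IDs, and --name is treated as a regular-expression filter. The repeated found id: "" lines mean the listing came back empty for every control-plane component. One probe in isolation:

    sudo crictl ps -a --quiet --name=kube-apiserver
    # empty output here corresponds to the "0 containers: []" lines in the log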
	I0229 02:33:58.991152  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.490426  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:00.587747  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:02.594677  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:01.010009  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:03.509687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:02.219660  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:02.234037  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:02.234105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:02.277956  370051 cri.go:89] found id: ""
	I0229 02:34:02.277982  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.277991  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:02.277998  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:02.278071  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:02.322832  370051 cri.go:89] found id: ""
	I0229 02:34:02.322856  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.322869  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:02.322878  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:02.322949  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:02.368612  370051 cri.go:89] found id: ""
	I0229 02:34:02.368646  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.368659  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:02.368668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:02.368731  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:02.412436  370051 cri.go:89] found id: ""
	I0229 02:34:02.412466  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.412479  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:02.412486  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:02.412544  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:02.448682  370051 cri.go:89] found id: ""
	I0229 02:34:02.448713  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.448724  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:02.448733  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:02.448803  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:02.486676  370051 cri.go:89] found id: ""
	I0229 02:34:02.486705  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.486723  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:02.486730  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:02.486795  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:02.531814  370051 cri.go:89] found id: ""
	I0229 02:34:02.531841  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.531852  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:02.531860  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:02.531934  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:02.569800  370051 cri.go:89] found id: ""
	I0229 02:34:02.569835  370051 logs.go:276] 0 containers: []
	W0229 02:34:02.569845  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:02.569857  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:02.569871  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:02.623903  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:02.623937  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:02.643856  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:02.643884  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:02.735520  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:02.735544  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:02.735563  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:02.816572  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:02.816612  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.371459  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:05.385179  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:05.385255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:05.424653  370051 cri.go:89] found id: ""
	I0229 02:34:05.424679  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.424687  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:05.424694  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:05.424752  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:05.463726  370051 cri.go:89] found id: ""
	I0229 02:34:05.463754  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.463763  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:05.463769  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:05.463823  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:05.510367  370051 cri.go:89] found id: ""
	I0229 02:34:05.510396  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.510407  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:05.510415  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:05.510480  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:05.548421  370051 cri.go:89] found id: ""
	I0229 02:34:05.548445  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.548455  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:05.548461  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:05.548527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:05.588778  370051 cri.go:89] found id: ""
	I0229 02:34:05.588801  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.588809  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:05.588815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:05.588875  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:05.638449  370051 cri.go:89] found id: ""
	I0229 02:34:05.638479  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.638490  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:05.638506  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:05.638567  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:05.709921  370051 cri.go:89] found id: ""
	I0229 02:34:05.709950  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.709964  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:05.709972  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:05.710038  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:05.756965  370051 cri.go:89] found id: ""
	I0229 02:34:05.756992  370051 logs.go:276] 0 containers: []
	W0229 02:34:05.757000  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:05.757010  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:05.757025  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:05.826878  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:05.826904  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:05.826921  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:05.909205  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:05.909256  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:05.954537  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:05.954594  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:06.004157  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:06.004203  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
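The unit-log gathering steps are plain journalctl reads: -u selects the systemd unit (kubelet or crio) and -n caps the output at the last 400 entries. Reproduced directly:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400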
	I0229 02:34:03.989381  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.990323  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.491379  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.086296  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:07.586477  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:05.511758  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.009545  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.010247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:08.522975  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:08.539247  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:08.539326  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:08.579776  370051 cri.go:89] found id: ""
	I0229 02:34:08.579806  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.579817  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:08.579826  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:08.579890  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:08.628415  370051 cri.go:89] found id: ""
	I0229 02:34:08.628444  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.628456  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:08.628468  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:08.628534  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:08.690499  370051 cri.go:89] found id: ""
	I0229 02:34:08.690530  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.690540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:08.690547  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:08.690613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:08.739755  370051 cri.go:89] found id: ""
	I0229 02:34:08.739788  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.739801  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:08.739809  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:08.739906  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:08.781693  370051 cri.go:89] found id: ""
	I0229 02:34:08.781721  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.781733  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:08.781742  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:08.781808  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:08.818605  370051 cri.go:89] found id: ""
	I0229 02:34:08.818637  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.818645  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:08.818652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:08.818713  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:08.861533  370051 cri.go:89] found id: ""
	I0229 02:34:08.861559  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.861569  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:08.861578  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:08.861658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:08.902727  370051 cri.go:89] found id: ""
	I0229 02:34:08.902758  370051 logs.go:276] 0 containers: []
	W0229 02:34:08.902771  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:08.902784  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:08.902801  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:08.948527  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:08.948567  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:08.999883  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:08.999916  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:09.015438  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:09.015467  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:09.087965  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:09.087994  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:09.088010  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:10.990135  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.991074  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:10.085517  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.086653  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:14.086817  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:12.510247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:15.010412  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
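The "describe nodes" step shells out to the kubectl binary that minikube ships for the cluster's Kubernetes version (here v1.16.0, under /var/lib/minikube/binaries/), pointed at the node-local kubeconfig, so it fails the same way any client would while the apiserver is down. In isolation:

    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # exits 1 with "connection ... refused" until localhost:8443 is serving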
	I0229 02:34:11.671443  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:11.702197  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:11.702322  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:11.755104  370051 cri.go:89] found id: ""
	I0229 02:34:11.755136  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.755147  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:11.755153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:11.755204  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:11.794190  370051 cri.go:89] found id: ""
	I0229 02:34:11.794218  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.794239  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:11.794247  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:11.794310  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:11.837330  370051 cri.go:89] found id: ""
	I0229 02:34:11.837360  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.837372  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:11.837380  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:11.837447  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:11.876682  370051 cri.go:89] found id: ""
	I0229 02:34:11.876716  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.876726  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:11.876734  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:11.876805  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:11.922172  370051 cri.go:89] found id: ""
	I0229 02:34:11.922239  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.922262  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:11.922271  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:11.922341  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:11.962218  370051 cri.go:89] found id: ""
	I0229 02:34:11.962270  370051 logs.go:276] 0 containers: []
	W0229 02:34:11.962283  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:11.962291  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:11.962375  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:12.002075  370051 cri.go:89] found id: ""
	I0229 02:34:12.002101  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.002110  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:12.002117  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:12.002169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:12.043337  370051 cri.go:89] found id: ""
	I0229 02:34:12.043378  370051 logs.go:276] 0 containers: []
	W0229 02:34:12.043399  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:12.043412  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:12.043428  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:12.094458  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:12.094491  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:12.112374  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:12.112401  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:12.193665  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:12.193689  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:12.193717  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:12.282510  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:12.282553  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:14.828451  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:14.843626  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:14.843690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:14.884181  370051 cri.go:89] found id: ""
	I0229 02:34:14.884214  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.884226  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:14.884235  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:14.884302  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:14.926312  370051 cri.go:89] found id: ""
	I0229 02:34:14.926347  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.926361  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:14.926369  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:14.926436  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:14.969147  370051 cri.go:89] found id: ""
	I0229 02:34:14.969182  370051 logs.go:276] 0 containers: []
	W0229 02:34:14.969195  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:14.969207  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:14.969277  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:15.013000  370051 cri.go:89] found id: ""
	I0229 02:34:15.013045  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.013055  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:15.013064  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:15.013120  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:15.055811  370051 cri.go:89] found id: ""
	I0229 02:34:15.055849  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.055861  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:15.055869  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:15.055939  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:15.100736  370051 cri.go:89] found id: ""
	I0229 02:34:15.100768  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.100780  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:15.100789  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:15.100867  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:15.140115  370051 cri.go:89] found id: ""
	I0229 02:34:15.140151  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.140164  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:15.140172  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:15.140239  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:15.183545  370051 cri.go:89] found id: ""
	I0229 02:34:15.183576  370051 logs.go:276] 0 containers: []
	W0229 02:34:15.183588  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:15.183602  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:15.183621  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:15.258646  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:15.258676  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:15.258693  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:15.347035  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:15.347082  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:15.407148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:15.407178  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:15.466695  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:15.466741  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:15.490797  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.990851  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:16.585993  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:18.587604  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:17.509114  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:19.509856  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
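
The interleaved pod_ready.go lines come from three other test processes (369591, 369508, 369869) concurrently polling their own clusters' metrics-server pods; has status "Ready":"False" means the pod's Ready condition has not yet turned True. The condition being polled is the standard one from the core API; a small illustrative helper (the function name is ours, the types are upstream k8s.io/api):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's Ready condition is True,
    // i.e. the state the pod_ready.go lines above keep waiting for.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
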
	I0229 02:34:17.989102  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:18.005052  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:18.005126  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:18.044687  370051 cri.go:89] found id: ""
	I0229 02:34:18.044714  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.044725  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:18.044739  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:18.044815  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:18.085904  370051 cri.go:89] found id: ""
	I0229 02:34:18.085934  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.085944  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:18.085952  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:18.086017  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:18.129958  370051 cri.go:89] found id: ""
	I0229 02:34:18.129985  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.129994  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:18.129999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:18.130052  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:18.166942  370051 cri.go:89] found id: ""
	I0229 02:34:18.166979  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.166991  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:18.167000  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:18.167056  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:18.205297  370051 cri.go:89] found id: ""
	I0229 02:34:18.205324  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.205331  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:18.205337  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:18.205410  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:18.246415  370051 cri.go:89] found id: ""
	I0229 02:34:18.246448  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.246461  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:18.246469  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:18.246527  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:18.285534  370051 cri.go:89] found id: ""
	I0229 02:34:18.285573  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.285585  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:18.285600  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:18.285662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:18.327624  370051 cri.go:89] found id: ""
	I0229 02:34:18.327651  370051 logs.go:276] 0 containers: []
	W0229 02:34:18.327659  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:18.327670  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:18.327684  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:18.383307  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:18.383351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:18.408127  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:18.408162  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:18.502036  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:18.502070  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:18.502093  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:18.582289  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:18.582340  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:20.490582  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:22.990210  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.086446  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:23.586600  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.511411  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:24.009976  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:21.135649  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:21.149411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:21.149498  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:21.198246  370051 cri.go:89] found id: ""
	I0229 02:34:21.198286  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.198298  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:21.198306  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:21.198378  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:21.240168  370051 cri.go:89] found id: ""
	I0229 02:34:21.240195  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.240203  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:21.240209  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:21.240275  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:21.281243  370051 cri.go:89] found id: ""
	I0229 02:34:21.281277  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.281288  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:21.281296  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:21.281359  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:21.321573  370051 cri.go:89] found id: ""
	I0229 02:34:21.321609  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.321621  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:21.321629  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:21.321693  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:21.375156  370051 cri.go:89] found id: ""
	I0229 02:34:21.375212  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.375226  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:21.375234  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:21.375308  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:21.430450  370051 cri.go:89] found id: ""
	I0229 02:34:21.430487  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.430499  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:21.430508  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:21.430576  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:21.475095  370051 cri.go:89] found id: ""
	I0229 02:34:21.475124  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.475135  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:21.475144  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:21.475215  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:21.517378  370051 cri.go:89] found id: ""
	I0229 02:34:21.517403  370051 logs.go:276] 0 containers: []
	W0229 02:34:21.517412  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:21.517424  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:21.517444  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:21.534103  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:21.534147  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:21.608375  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:21.608400  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:21.608412  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:21.691912  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:21.691950  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:21.744366  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:21.744406  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
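
Each "Gathering logs for ..." step shells out to one fixed node-side command per source; the commands below are exactly the ones quoted in the log, while the table-driven shape is illustrative rather than minikube's internal structure.

    package sketch

    // Log sources and the node-side command this report shows for each.
    var logSources = map[string]string{
    	"kubelet":          `sudo journalctl -u kubelet -n 400`,
    	"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    	"CRI-O":            `sudo journalctl -u crio -n 400`,
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	"describe nodes":   `sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
    }
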
	I0229 02:34:24.295384  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:24.309456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:24.309539  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:24.370125  370051 cri.go:89] found id: ""
	I0229 02:34:24.370156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.370167  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:24.370175  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:24.370256  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:24.439458  370051 cri.go:89] found id: ""
	I0229 02:34:24.439487  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.439499  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:24.439506  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:24.439639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:24.478070  370051 cri.go:89] found id: ""
	I0229 02:34:24.478105  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.478119  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:24.478127  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:24.478194  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:24.517128  370051 cri.go:89] found id: ""
	I0229 02:34:24.517156  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.517168  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:24.517176  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:24.517243  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:24.555502  370051 cri.go:89] found id: ""
	I0229 02:34:24.555537  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.555549  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:24.555557  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:24.555625  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:24.601261  370051 cri.go:89] found id: ""
	I0229 02:34:24.601295  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.601307  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:24.601315  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:24.601389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:24.639110  370051 cri.go:89] found id: ""
	I0229 02:34:24.639141  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.639153  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:24.639161  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:24.639224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:24.681448  370051 cri.go:89] found id: ""
	I0229 02:34:24.681478  370051 logs.go:276] 0 containers: []
	W0229 02:34:24.681487  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:24.681498  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:24.681517  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:24.730735  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:24.730775  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:24.746996  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:24.747031  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:24.827581  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:24.827608  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:24.827628  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:24.909551  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:24.909596  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:24.990581  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.489787  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:25.586672  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.586999  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:26.509819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:29.009014  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:27.455967  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:27.477411  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:27.477487  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:27.523163  370051 cri.go:89] found id: ""
	I0229 02:34:27.523189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.523198  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:27.523203  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:27.523258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:27.562298  370051 cri.go:89] found id: ""
	I0229 02:34:27.562330  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.562343  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:27.562350  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:27.562420  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:27.603506  370051 cri.go:89] found id: ""
	I0229 02:34:27.603532  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.603540  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:27.603554  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:27.603619  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:27.646971  370051 cri.go:89] found id: ""
	I0229 02:34:27.647002  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.647014  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:27.647031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:27.647109  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:27.685124  370051 cri.go:89] found id: ""
	I0229 02:34:27.685149  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.685160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:27.685169  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:27.685235  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:27.726976  370051 cri.go:89] found id: ""
	I0229 02:34:27.727007  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.727018  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:27.727026  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:27.727089  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:27.767159  370051 cri.go:89] found id: ""
	I0229 02:34:27.767189  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.767197  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:27.767204  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:27.767272  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:27.810377  370051 cri.go:89] found id: ""
	I0229 02:34:27.810411  370051 logs.go:276] 0 containers: []
	W0229 02:34:27.810420  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:27.810431  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:27.810447  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:27.858094  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:27.858136  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:27.874407  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:27.874440  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:27.953065  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:27.953092  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:27.953108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:28.042244  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:28.042278  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:30.588227  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:30.604954  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:30.605037  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:30.642069  370051 cri.go:89] found id: ""
	I0229 02:34:30.642100  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.642108  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:30.642119  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:30.642187  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:30.686212  370051 cri.go:89] found id: ""
	I0229 02:34:30.686264  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.686277  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:30.686285  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:30.686364  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:30.726668  370051 cri.go:89] found id: ""
	I0229 02:34:30.726702  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.726715  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:30.726723  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:30.726788  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:30.766850  370051 cri.go:89] found id: ""
	I0229 02:34:30.766883  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.766895  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:30.766904  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:30.766979  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:30.808972  370051 cri.go:89] found id: ""
	I0229 02:34:30.809002  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.809015  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:30.809023  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:30.809093  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:30.851992  370051 cri.go:89] found id: ""
	I0229 02:34:30.852016  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.852025  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:30.852031  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:30.852096  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:30.891100  370051 cri.go:89] found id: ""
	I0229 02:34:30.891132  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.891144  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:30.891157  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:30.891227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:30.931740  370051 cri.go:89] found id: ""
	I0229 02:34:30.931768  370051 logs.go:276] 0 containers: []
	W0229 02:34:30.931777  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:30.931787  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:30.931808  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:31.010896  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:31.010919  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:31.010936  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:31.094626  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:31.094662  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:29.490211  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.490659  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:30.086898  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:32.587485  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.010003  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:33.510267  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:31.150765  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:31.150804  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:31.202932  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:31.202976  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:33.723355  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:33.738651  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:33.738753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:33.778255  370051 cri.go:89] found id: ""
	I0229 02:34:33.778287  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.778299  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:33.778307  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:33.778384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:33.818360  370051 cri.go:89] found id: ""
	I0229 02:34:33.818396  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.818406  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:33.818412  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:33.818564  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:33.866781  370051 cri.go:89] found id: ""
	I0229 02:34:33.866814  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.866824  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:33.866831  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:33.866891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:33.910013  370051 cri.go:89] found id: ""
	I0229 02:34:33.910051  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.910063  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:33.910072  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:33.910146  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:33.956068  370051 cri.go:89] found id: ""
	I0229 02:34:33.956098  370051 logs.go:276] 0 containers: []
	W0229 02:34:33.956106  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:33.956113  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:33.956170  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:34.004997  370051 cri.go:89] found id: ""
	I0229 02:34:34.005027  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.005038  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:34.005047  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:34.005113  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:34.059266  370051 cri.go:89] found id: ""
	I0229 02:34:34.059293  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.059302  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:34.059307  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:34.059363  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:34.105601  370051 cri.go:89] found id: ""
	I0229 02:34:34.105631  370051 logs.go:276] 0 containers: []
	W0229 02:34:34.105643  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:34.105654  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:34.105669  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:34.208723  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:34.208764  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:34.262105  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:34.262137  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:34.314528  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:34.314571  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:34.332441  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:34.332477  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:34.406303  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
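
Every "describe nodes" attempt in this stretch fails the same way: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and since the crictl queries above find no kube-apiserver container, nothing is listening there, so kubectl exits with status 1 and "connection refused". The gather loop treats this as non-fatal and keeps polling. A hedged one-function sketch of how a caller might recognize that case (the helper name is ours):

    package sketch

    import "strings"

    // apiserverDown reports whether kubectl's stderr indicates that the
    // API server is simply not listening yet, as in the failures above.
    func apiserverDown(stderr string) bool {
    	return strings.Contains(stderr,
    		"The connection to the server localhost:8443 was refused")
    }
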
	I0229 02:34:33.990257  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.490844  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:35.085482  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:37.086532  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:39.087022  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.015574  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:38.510064  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:36.906814  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:36.922297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:36.922377  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:36.967550  370051 cri.go:89] found id: ""
	I0229 02:34:36.967578  370051 logs.go:276] 0 containers: []
	W0229 02:34:36.967589  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:36.967599  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:36.967662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:37.007589  370051 cri.go:89] found id: ""
	I0229 02:34:37.007614  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.007624  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:37.007632  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:37.007706  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:37.048230  370051 cri.go:89] found id: ""
	I0229 02:34:37.048260  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.048273  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:37.048281  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:37.048354  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:37.089329  370051 cri.go:89] found id: ""
	I0229 02:34:37.089355  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.089365  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:37.089373  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:37.089441  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:37.144654  370051 cri.go:89] found id: ""
	I0229 02:34:37.144687  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.144699  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:37.144708  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:37.144778  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:37.203822  370051 cri.go:89] found id: ""
	I0229 02:34:37.203857  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.203868  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:37.203876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:37.203948  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:37.250369  370051 cri.go:89] found id: ""
	I0229 02:34:37.250398  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.250410  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:37.250417  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:37.250490  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:37.290924  370051 cri.go:89] found id: ""
	I0229 02:34:37.290957  370051 logs.go:276] 0 containers: []
	W0229 02:34:37.290969  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:37.290981  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:37.290995  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:37.343878  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:37.343920  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:37.359307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:37.359336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:37.435264  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:37.435292  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:37.435309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:37.518274  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:37.518309  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:40.062232  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:40.079883  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:40.079957  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:40.123826  370051 cri.go:89] found id: ""
	I0229 02:34:40.123856  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.123866  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:40.123874  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:40.123943  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:40.190273  370051 cri.go:89] found id: ""
	I0229 02:34:40.190321  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.190332  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:40.190338  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:40.190395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:40.232921  370051 cri.go:89] found id: ""
	I0229 02:34:40.232949  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.232961  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:40.232968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:40.233034  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:40.273490  370051 cri.go:89] found id: ""
	I0229 02:34:40.273517  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.273526  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:40.273538  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:40.273594  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:40.317121  370051 cri.go:89] found id: ""
	I0229 02:34:40.317152  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.317163  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:40.317171  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:40.317230  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:40.363347  370051 cri.go:89] found id: ""
	I0229 02:34:40.363380  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.363389  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:40.363396  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:40.363459  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:40.407187  370051 cri.go:89] found id: ""
	I0229 02:34:40.407213  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.407222  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:40.407231  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:40.407282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:40.447185  370051 cri.go:89] found id: ""
	I0229 02:34:40.447218  370051 logs.go:276] 0 containers: []
	W0229 02:34:40.447229  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:40.447242  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:40.447258  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:40.496998  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:40.497029  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:40.512520  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:40.512549  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:40.589150  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:40.589173  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:40.589190  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:40.677054  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:40.677096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:38.991307  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:40.992688  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.490195  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.585962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.586942  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:41.009837  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.510138  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:43.222265  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:43.236567  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:43.236629  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:43.282917  370051 cri.go:89] found id: ""
	I0229 02:34:43.282959  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.282976  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:43.282982  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:43.283049  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:43.329273  370051 cri.go:89] found id: ""
	I0229 02:34:43.329302  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.329313  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:43.329321  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:43.329386  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:43.366696  370051 cri.go:89] found id: ""
	I0229 02:34:43.366723  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.366732  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:43.366739  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:43.366800  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:43.405793  370051 cri.go:89] found id: ""
	I0229 02:34:43.405820  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.405828  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:43.405834  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:43.405888  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:43.442870  370051 cri.go:89] found id: ""
	I0229 02:34:43.442898  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.442906  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:43.442912  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:43.442964  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:43.484581  370051 cri.go:89] found id: ""
	I0229 02:34:43.484615  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.484626  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:43.484635  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:43.484702  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:43.530931  370051 cri.go:89] found id: ""
	I0229 02:34:43.530954  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.530963  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:43.530968  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:43.531024  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:43.572810  370051 cri.go:89] found id: ""
	I0229 02:34:43.572838  370051 logs.go:276] 0 containers: []
	W0229 02:34:43.572850  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:43.572867  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:43.572883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:43.622815  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:43.622854  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:43.637972  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:43.638012  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:43.713704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:43.713728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:43.713746  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:43.797178  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:43.797220  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
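
Across this section the whole probe-and-gather cycle repeats at a roughly three-second cadence (02:34:12.0, :14.8, :18.0, :21.1, :24.3, :27.5, :30.6, :33.7, :36.9, :40.1, :43.2, ...), consistent with a fixed-interval retry rather than exponential backoff. A minimal sketch of that shape, reusing the probe and gather steps from the earlier sketches; the 3s interval is inferred from these timestamps, not taken from minikube's source.

    package sketch

    import "time"

    // waitForAPIServer polls until probe succeeds, gathering diagnostics
    // after each failed attempt, as the log above appears to do.
    func waitForAPIServer(probe func() bool, gather func()) {
    	ticker := time.NewTicker(3 * time.Second)
    	defer ticker.Stop()
    	for range ticker.C {
    		if probe() {
    			return
    		}
    		gather()
    	}
    }
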
	I0229 02:34:45.490670  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:47.989828  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:45.587464  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.090384  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.009454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:48.010403  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:46.347159  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:46.361601  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:46.361682  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:46.399751  370051 cri.go:89] found id: ""
	I0229 02:34:46.399784  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.399795  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:46.399804  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:46.399870  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:46.445367  370051 cri.go:89] found id: ""
	I0229 02:34:46.445398  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.445407  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:46.445413  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:46.445486  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:46.490323  370051 cri.go:89] found id: ""
	I0229 02:34:46.490363  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.490385  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:46.490393  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:46.490473  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:46.531406  370051 cri.go:89] found id: ""
	I0229 02:34:46.531441  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.531450  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:46.531456  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:46.531507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:46.572759  370051 cri.go:89] found id: ""
	I0229 02:34:46.572787  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.572795  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:46.572804  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:46.572908  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:46.613055  370051 cri.go:89] found id: ""
	I0229 02:34:46.613083  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.613093  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:46.613099  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:46.613153  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:46.657504  370051 cri.go:89] found id: ""
	I0229 02:34:46.657536  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.657544  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:46.657550  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:46.657605  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:46.698008  370051 cri.go:89] found id: ""
	I0229 02:34:46.698057  370051 logs.go:276] 0 containers: []
	W0229 02:34:46.698068  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:46.698080  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:46.698097  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:46.746648  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:46.746682  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:46.761190  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:46.761219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:46.843379  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:46.843403  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:46.843415  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:46.933493  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:46.933546  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.491837  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:49.508647  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:49.508717  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:49.550752  370051 cri.go:89] found id: ""
	I0229 02:34:49.550788  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.550800  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:49.550809  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:49.550883  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:49.597623  370051 cri.go:89] found id: ""
	I0229 02:34:49.597663  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.597675  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:49.597683  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:49.597764  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:49.635207  370051 cri.go:89] found id: ""
	I0229 02:34:49.635230  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.635238  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:49.635282  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:49.635336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:49.674664  370051 cri.go:89] found id: ""
	I0229 02:34:49.674696  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.674708  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:49.674716  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:49.674777  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:49.715391  370051 cri.go:89] found id: ""
	I0229 02:34:49.715420  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.715433  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:49.715442  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:49.715497  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:49.753318  370051 cri.go:89] found id: ""
	I0229 02:34:49.753352  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.753373  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:49.753382  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:49.753451  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:49.791342  370051 cri.go:89] found id: ""
	I0229 02:34:49.791369  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.791377  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:49.791384  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:49.791456  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:49.838148  370051 cri.go:89] found id: ""
	I0229 02:34:49.838181  370051 logs.go:276] 0 containers: []
	W0229 02:34:49.838191  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:49.838204  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:49.838244  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:49.891532  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:49.891568  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:49.917625  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:49.917664  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:50.019436  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:50.019457  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:50.019472  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:50.108302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:50.108349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:49.991272  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.491139  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.586652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.586940  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:50.509504  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:53.010818  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:52.654561  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:52.668331  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:52.668402  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:52.718431  370051 cri.go:89] found id: ""
	I0229 02:34:52.718471  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.718484  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:52.718493  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:52.718551  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:52.757913  370051 cri.go:89] found id: ""
	I0229 02:34:52.757946  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.757957  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:52.757965  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:52.758035  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:52.796792  370051 cri.go:89] found id: ""
	I0229 02:34:52.796821  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.796833  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:52.796842  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:52.796913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:52.832157  370051 cri.go:89] found id: ""
	I0229 02:34:52.832187  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.832196  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:52.832203  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:52.832264  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:52.879170  370051 cri.go:89] found id: ""
	I0229 02:34:52.879197  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.879206  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:52.879212  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:52.879265  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:52.924219  370051 cri.go:89] found id: ""
	I0229 02:34:52.924249  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.924258  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:52.924264  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:52.924318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:52.980422  370051 cri.go:89] found id: ""
	I0229 02:34:52.980450  370051 logs.go:276] 0 containers: []
	W0229 02:34:52.980457  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:52.980463  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:52.980525  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:53.026393  370051 cri.go:89] found id: ""
	I0229 02:34:53.026418  370051 logs.go:276] 0 containers: []
	W0229 02:34:53.026426  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:53.026436  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:53.026453  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:53.075135  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:53.075174  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:53.092197  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:53.092223  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:53.164397  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:53.164423  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:53.164439  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:53.250310  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:53.250366  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:55.792993  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:55.807152  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:55.807229  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:55.867791  370051 cri.go:89] found id: ""
	I0229 02:34:55.867821  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.867830  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:55.867847  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:55.867925  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:55.922960  370051 cri.go:89] found id: ""
	I0229 02:34:55.922989  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.923001  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:55.923009  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:55.923076  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:55.972510  370051 cri.go:89] found id: ""
	I0229 02:34:55.972541  370051 logs.go:276] 0 containers: []
	W0229 02:34:55.972552  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:55.972560  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:55.972632  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:56.011948  370051 cri.go:89] found id: ""
	I0229 02:34:56.011980  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.011990  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:56.011999  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:56.012077  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:56.052624  370051 cri.go:89] found id: ""
	I0229 02:34:56.052653  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.052662  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:56.052668  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:56.052722  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:56.089075  370051 cri.go:89] found id: ""
	I0229 02:34:56.089100  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.089108  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:56.089114  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:56.089180  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:56.130369  370051 cri.go:89] found id: ""
	I0229 02:34:56.130403  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.130416  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:56.130424  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:56.130496  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:54.989569  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.991424  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.085652  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.585291  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.586439  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:55.509734  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:57.510165  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:59.511749  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:34:56.177812  370051 cri.go:89] found id: ""
	I0229 02:34:56.177843  370051 logs.go:276] 0 containers: []
	W0229 02:34:56.177854  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:56.177875  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:56.177894  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:56.224294  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:56.224336  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:56.275874  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:56.275909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:56.291172  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:56.291202  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:56.364839  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:56.364870  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:56.364888  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:58.950871  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:34:58.966327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:34:58.966389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:34:59.005914  370051 cri.go:89] found id: ""
	I0229 02:34:59.005952  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.005968  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:34:59.005976  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:34:59.006045  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:34:59.043962  370051 cri.go:89] found id: ""
	I0229 02:34:59.043993  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.044005  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:34:59.044013  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:34:59.044167  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:34:59.089398  370051 cri.go:89] found id: ""
	I0229 02:34:59.089426  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.089434  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:34:59.089440  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:34:59.089491  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:34:59.130786  370051 cri.go:89] found id: ""
	I0229 02:34:59.130815  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.130824  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:34:59.130830  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:34:59.130909  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:34:59.174807  370051 cri.go:89] found id: ""
	I0229 02:34:59.174836  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.174848  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:34:59.174855  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:34:59.174929  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:34:59.217745  370051 cri.go:89] found id: ""
	I0229 02:34:59.217792  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.217800  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:34:59.217806  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:34:59.217858  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:34:59.260906  370051 cri.go:89] found id: ""
	I0229 02:34:59.260939  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.260950  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:34:59.260957  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:34:59.261025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:34:59.299114  370051 cri.go:89] found id: ""
	I0229 02:34:59.299140  370051 logs.go:276] 0 containers: []
	W0229 02:34:59.299150  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:34:59.299161  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:34:59.299173  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:34:59.349630  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:34:59.349672  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:34:59.365679  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:34:59.365710  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:34:59.438234  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:34:59.438261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:34:59.438280  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:34:59.524185  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:34:59.524219  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:34:58.991975  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:01.489719  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:03.490315  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.087731  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.585197  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.008802  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:04.509210  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:02.068320  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:02.082910  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:02.082988  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:02.122095  370051 cri.go:89] found id: ""
	I0229 02:35:02.122132  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.122145  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:02.122153  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:02.122245  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:02.160982  370051 cri.go:89] found id: ""
	I0229 02:35:02.161013  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.161029  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:02.161043  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:02.161108  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:02.200603  370051 cri.go:89] found id: ""
	I0229 02:35:02.200637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.200650  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:02.200658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:02.200746  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:02.243100  370051 cri.go:89] found id: ""
	I0229 02:35:02.243126  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.243134  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:02.243140  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:02.243207  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:02.282758  370051 cri.go:89] found id: ""
	I0229 02:35:02.282793  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.282806  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:02.282815  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:02.282884  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:02.324402  370051 cri.go:89] found id: ""
	I0229 02:35:02.324434  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.324444  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:02.324455  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:02.324520  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:02.368608  370051 cri.go:89] found id: ""
	I0229 02:35:02.368637  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.368650  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:02.368658  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:02.368726  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:02.411449  370051 cri.go:89] found id: ""
	I0229 02:35:02.411484  370051 logs.go:276] 0 containers: []
	W0229 02:35:02.411497  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:02.411509  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:02.411526  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:02.427942  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:02.427974  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:02.498848  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:02.498884  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:02.498902  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:02.585701  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:02.585749  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:02.642055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:02.642096  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.201769  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:05.215944  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:05.216020  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:05.254080  370051 cri.go:89] found id: ""
	I0229 02:35:05.254107  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.254121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:05.254128  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:05.254179  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:05.296990  370051 cri.go:89] found id: ""
	I0229 02:35:05.297022  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.297034  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:05.297042  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:05.297111  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:05.336241  370051 cri.go:89] found id: ""
	I0229 02:35:05.336275  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.336290  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:05.336299  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:05.336395  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:05.377620  370051 cri.go:89] found id: ""
	I0229 02:35:05.377649  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.377658  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:05.377664  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:05.377712  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:05.416275  370051 cri.go:89] found id: ""
	I0229 02:35:05.416303  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.416311  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:05.416318  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:05.416373  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:05.455375  370051 cri.go:89] found id: ""
	I0229 02:35:05.455412  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.455426  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:05.455436  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:05.455507  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:05.495862  370051 cri.go:89] found id: ""
	I0229 02:35:05.495887  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.495897  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:05.495905  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:05.495969  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:05.541218  370051 cri.go:89] found id: ""
	I0229 02:35:05.541247  370051 logs.go:276] 0 containers: []
	W0229 02:35:05.541260  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:05.541273  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:05.541288  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:05.629982  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:05.630023  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:05.719026  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:05.719066  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:05.785318  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:05.785359  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:05.801181  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:05.801214  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:05.871333  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:05.490857  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:07.991044  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.587458  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:09.086313  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:06.510265  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.510391  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:08.371982  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:08.386451  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:08.386514  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:08.430045  370051 cri.go:89] found id: ""
	I0229 02:35:08.430077  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.430090  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:08.430099  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:08.430169  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:08.470547  370051 cri.go:89] found id: ""
	I0229 02:35:08.470583  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.470596  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:08.470604  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:08.470671  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:08.512637  370051 cri.go:89] found id: ""
	I0229 02:35:08.512676  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.512687  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:08.512695  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:08.512759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:08.556228  370051 cri.go:89] found id: ""
	I0229 02:35:08.556263  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.556271  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:08.556277  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:08.556335  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:08.613838  370051 cri.go:89] found id: ""
	I0229 02:35:08.613868  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.613878  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:08.613884  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:08.613940  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:08.686408  370051 cri.go:89] found id: ""
	I0229 02:35:08.686442  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.686454  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:08.686462  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:08.686519  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:08.725665  370051 cri.go:89] found id: ""
	I0229 02:35:08.725697  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.725710  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:08.725719  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:08.725776  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:08.765639  370051 cri.go:89] found id: ""
	I0229 02:35:08.765666  370051 logs.go:276] 0 containers: []
	W0229 02:35:08.765674  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:08.765684  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:08.765695  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:08.813097  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:08.813135  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:08.828880  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:08.828909  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:08.903237  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:08.903261  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:08.903281  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:08.991710  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:08.991745  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:10.491022  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:12.491159  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.086828  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.586274  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.009650  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:13.011571  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:11.536724  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:11.551614  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:11.551690  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:11.593078  370051 cri.go:89] found id: ""
	I0229 02:35:11.593110  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.593121  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:11.593129  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:11.593185  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:11.645696  370051 cri.go:89] found id: ""
	I0229 02:35:11.645729  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.645742  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:11.645751  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:11.645820  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:11.691181  370051 cri.go:89] found id: ""
	I0229 02:35:11.691213  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.691226  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:11.691245  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:11.691318  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:11.745906  370051 cri.go:89] found id: ""
	I0229 02:35:11.745933  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.745946  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:11.745953  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:11.746019  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:11.784895  370051 cri.go:89] found id: ""
	I0229 02:35:11.784927  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.784940  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:11.784949  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:11.785025  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:11.825341  370051 cri.go:89] found id: ""
	I0229 02:35:11.825372  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.825384  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:11.825392  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:11.825464  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:11.862454  370051 cri.go:89] found id: ""
	I0229 02:35:11.862492  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.862505  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:11.862523  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:11.862604  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:11.908424  370051 cri.go:89] found id: ""
	I0229 02:35:11.908450  370051 logs.go:276] 0 containers: []
	W0229 02:35:11.908459  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:11.908469  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:11.908487  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:11.956274  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:11.956313  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:11.972363  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:11.972397  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:12.052030  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:12.052057  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:12.052078  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:12.138388  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:12.138431  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.691474  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:14.724652  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:14.724739  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:14.765210  370051 cri.go:89] found id: ""
	I0229 02:35:14.765237  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.765246  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:14.765253  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:14.765306  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:14.808226  370051 cri.go:89] found id: ""
	I0229 02:35:14.808258  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.808270  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:14.808287  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:14.808357  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:14.847999  370051 cri.go:89] found id: ""
	I0229 02:35:14.848030  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.848041  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:14.848049  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:14.848123  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:14.887221  370051 cri.go:89] found id: ""
	I0229 02:35:14.887248  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.887256  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:14.887263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:14.887339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:14.929905  370051 cri.go:89] found id: ""
	I0229 02:35:14.929933  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.929950  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:14.929956  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:14.930011  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:14.969697  370051 cri.go:89] found id: ""
	I0229 02:35:14.969739  370051 logs.go:276] 0 containers: []
	W0229 02:35:14.969761  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:14.969770  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:14.969837  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:15.013387  370051 cri.go:89] found id: ""
	I0229 02:35:15.013418  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.013429  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:15.013437  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:15.013493  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:15.058199  370051 cri.go:89] found id: ""
	I0229 02:35:15.058240  370051 logs.go:276] 0 containers: []
	W0229 02:35:15.058253  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:15.058270  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:15.058287  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:15.110165  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:15.110213  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:15.127417  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:15.127452  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:15.203330  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:15.203370  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:15.203405  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:15.283455  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:15.283501  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:14.991352  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.490127  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.586556  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:18.085962  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:15.509530  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.512518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:20.009873  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:17.829187  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:17.844678  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:17.844759  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:17.885549  370051 cri.go:89] found id: ""
	I0229 02:35:17.885581  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.885594  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:17.885601  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:17.885670  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:17.925652  370051 cri.go:89] found id: ""
	I0229 02:35:17.925679  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.925691  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:17.925699  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:17.925766  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:17.963172  370051 cri.go:89] found id: ""
	I0229 02:35:17.963203  370051 logs.go:276] 0 containers: []
	W0229 02:35:17.963215  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:17.963224  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:17.963282  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:18.003528  370051 cri.go:89] found id: ""
	I0229 02:35:18.003560  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.003572  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:18.003579  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:18.003644  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:18.046494  370051 cri.go:89] found id: ""
	I0229 02:35:18.046526  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.046537  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:18.046545  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:18.046613  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:18.084963  370051 cri.go:89] found id: ""
	I0229 02:35:18.084993  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.085004  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:18.085013  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:18.085074  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:18.125521  370051 cri.go:89] found id: ""
	I0229 02:35:18.125547  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.125556  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:18.125563  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:18.125623  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:18.169963  370051 cri.go:89] found id: ""
	I0229 02:35:18.169995  370051 logs.go:276] 0 containers: []
	W0229 02:35:18.170006  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:18.170020  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:18.170035  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:18.225414  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:18.225460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:18.242069  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:18.242108  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:18.312704  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:18.312728  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:18.312742  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:18.397206  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:18.397249  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:20.968000  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:20.983115  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:20.983196  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:21.025710  370051 cri.go:89] found id: ""
	I0229 02:35:21.025735  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.025743  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:21.025749  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:21.025812  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:21.065825  370051 cri.go:89] found id: ""
	I0229 02:35:21.065854  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.065862  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:21.065868  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:21.065928  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:21.104738  370051 cri.go:89] found id: ""
	I0229 02:35:21.104770  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.104782  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:21.104790  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:21.104871  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:19.990622  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491026  369591 pod_ready.go:102] pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.491059  369591 pod_ready.go:81] duration metric: took 4m0.008454624s waiting for pod "metrics-server-57f55c9bc5-zghwq" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:22.491069  369591 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:35:22.491077  369591 pod_ready.go:38] duration metric: took 4m5.576507129s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:22.491094  369591 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:35:22.491124  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:22.491174  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:22.562384  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:22.562412  369591 cri.go:89] found id: ""
	I0229 02:35:22.562422  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:22.562487  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.567997  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:22.568073  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:22.632786  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:22.632811  369591 cri.go:89] found id: ""
	I0229 02:35:22.632822  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:22.632887  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.637899  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:22.637975  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:22.681988  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:22.682014  369591 cri.go:89] found id: ""
	I0229 02:35:22.682024  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:22.682084  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.687515  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:22.687606  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:22.732907  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:22.732931  369591 cri.go:89] found id: ""
	I0229 02:35:22.732939  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:22.732995  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.737695  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:22.737758  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:22.779316  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:22.779341  369591 cri.go:89] found id: ""
	I0229 02:35:22.779349  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:22.779413  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.786533  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:22.786617  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:22.834391  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:22.834420  369591 cri.go:89] found id: ""
	I0229 02:35:22.834430  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:22.834500  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.839386  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:22.839458  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:22.881275  369591 cri.go:89] found id: ""
	I0229 02:35:22.881304  369591 logs.go:276] 0 containers: []
	W0229 02:35:22.881317  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:22.881326  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:22.881404  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:22.932822  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:22.932846  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:22.932850  369591 cri.go:89] found id: ""
	I0229 02:35:22.932858  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:22.932913  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.938541  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:22.943263  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:22.943288  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:22.994089  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:22.994122  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:23.051780  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:23.051821  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:23.099220  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:23.099251  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:23.157383  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:23.157429  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:23.206125  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:23.206180  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:23.261950  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:23.261982  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:23.324394  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:23.324427  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:23.400608  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:23.400648  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:20.589079  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:23.088469  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:22.510074  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:24.002388  369869 pod_ready.go:81] duration metric: took 4m0.000212386s waiting for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" ...
	E0229 02:35:24.002420  369869 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-86frx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0229 02:35:24.002439  369869 pod_ready.go:38] duration metric: took 4m6.701505951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:35:24.002490  369869 kubeadm.go:640] restartCluster took 4m24.423602043s
	W0229 02:35:24.002593  369869 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0229 02:35:24.002621  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
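
Having timed out after 4m0s waiting for the labelled system-critical pods, restartCluster gives up and this process falls back to a full reset. The command it runs is in the line above; reproduced as a standalone invocation for reference:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
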
	I0229 02:35:21.147180  370051 cri.go:89] found id: ""
	I0229 02:35:21.147211  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.147221  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:21.147228  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:21.147284  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:21.187240  370051 cri.go:89] found id: ""
	I0229 02:35:21.187275  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.187287  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:21.187295  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:21.187389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:21.228873  370051 cri.go:89] found id: ""
	I0229 02:35:21.228899  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.228917  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:21.228924  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:21.228992  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:21.268827  370051 cri.go:89] found id: ""
	I0229 02:35:21.268856  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.268867  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:21.268876  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:21.268970  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:21.313253  370051 cri.go:89] found id: ""
	I0229 02:35:21.313288  370051 logs.go:276] 0 containers: []
	W0229 02:35:21.313297  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:21.313307  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:21.313328  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:21.448089  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:21.448120  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:21.448146  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:21.539941  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:21.539983  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:21.590148  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:21.590186  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:21.647760  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:21.647797  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:24.165842  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:24.183263  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:24.183345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:24.233173  370051 cri.go:89] found id: ""
	I0229 02:35:24.233208  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.233219  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:24.233228  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:24.233301  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:24.276937  370051 cri.go:89] found id: ""
	I0229 02:35:24.276977  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.276989  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:24.276998  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:24.277066  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:24.314629  370051 cri.go:89] found id: ""
	I0229 02:35:24.314665  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.314678  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:24.314686  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:24.314753  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:24.367585  370051 cri.go:89] found id: ""
	I0229 02:35:24.367618  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.367630  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:24.367639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:24.367709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:24.451128  370051 cri.go:89] found id: ""
	I0229 02:35:24.451151  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.451160  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:24.451167  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:24.451258  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:24.497302  370051 cri.go:89] found id: ""
	I0229 02:35:24.497336  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.497348  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:24.497357  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:24.497431  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:24.544593  370051 cri.go:89] found id: ""
	I0229 02:35:24.544621  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.544632  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:24.544640  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:24.544714  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:24.584570  370051 cri.go:89] found id: ""
	I0229 02:35:24.584601  370051 logs.go:276] 0 containers: []
	W0229 02:35:24.584613  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:24.584626  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:24.584645  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:24.669019  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:24.669044  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:24.669061  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:24.752163  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:24.752205  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:24.811945  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:24.811985  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:24.874832  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:24.874873  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
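
The recurring dmesg invocation restricts the kernel ring buffer to warning-and-worse severities; with util-linux dmesg, -P disables the pager, -H selects human-readable output, and -L=never suppresses color codes. Verbatim from the log:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
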
	I0229 02:35:23.928222  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:23.928275  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:23.983171  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:23.983216  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:23.999343  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:23.999382  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:24.180422  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:24.180476  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.745283  369591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:26.768785  369591 api_server.go:72] duration metric: took 4m17.549714658s to wait for apiserver process to appear ...
	I0229 02:35:26.768823  369591 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:35:26.768885  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:26.768949  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:26.816275  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:26.816303  369591 cri.go:89] found id: ""
	I0229 02:35:26.816314  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:26.816379  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.820985  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:26.821062  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:26.870520  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:26.870545  369591 cri.go:89] found id: ""
	I0229 02:35:26.870555  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:26.870613  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.875785  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:26.875869  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:26.926844  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:26.926884  369591 cri.go:89] found id: ""
	I0229 02:35:26.926895  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:26.926963  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.933667  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:26.933747  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:26.988547  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:26.988575  369591 cri.go:89] found id: ""
	I0229 02:35:26.988584  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:26.988645  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:26.994520  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:26.994600  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.040568  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.040602  369591 cri.go:89] found id: ""
	I0229 02:35:27.040612  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:27.040679  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.046103  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.046161  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.094322  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.094345  369591 cri.go:89] found id: ""
	I0229 02:35:27.094357  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:27.094428  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.101702  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.101779  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.164549  369591 cri.go:89] found id: ""
	I0229 02:35:27.164584  369591 logs.go:276] 0 containers: []
	W0229 02:35:27.164596  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.164604  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:27.164674  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:27.219403  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:27.219431  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:27.219436  369591 cri.go:89] found id: ""
	I0229 02:35:27.219447  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:27.219510  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.226705  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:27.233551  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:27.233576  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:27.281111  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:27.281152  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:27.333686  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:27.333738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:27.948683  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.948736  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:28.018866  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:28.018917  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:28.164820  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:28.164857  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:28.222926  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:28.222963  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:28.265708  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:28.265738  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:28.309311  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.309352  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:28.363295  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:28.363341  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:28.384099  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:28.384146  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:28.451988  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:28.452025  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:28.499748  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:28.499783  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
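
Each "Gathering logs for <component> [<id>] ..." step above shells out to crictl with the container ID discovered earlier in the same pass; a single step looks like this (command and ID copied verbatim from the log):

    sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35
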
	I0229 02:35:25.586753  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.589329  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:27.392846  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:27.419255  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:27.419339  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:27.465294  370051 cri.go:89] found id: ""
	I0229 02:35:27.465325  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.465337  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:27.465345  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:27.465417  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:27.533393  370051 cri.go:89] found id: ""
	I0229 02:35:27.533424  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.533433  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:27.533441  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:27.533510  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:27.587195  370051 cri.go:89] found id: ""
	I0229 02:35:27.587221  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.587232  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:27.587240  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:27.587313  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:27.638597  370051 cri.go:89] found id: ""
	I0229 02:35:27.638624  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.638632  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:27.638639  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:27.638709  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:27.687695  370051 cri.go:89] found id: ""
	I0229 02:35:27.687730  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.687742  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:27.687750  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:27.687825  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:27.732275  370051 cri.go:89] found id: ""
	I0229 02:35:27.732309  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.732320  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:27.732327  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:27.732389  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:27.783069  370051 cri.go:89] found id: ""
	I0229 02:35:27.783109  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.783122  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:27.783133  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:27.783224  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:27.832385  370051 cri.go:89] found id: ""
	I0229 02:35:27.832416  370051 logs.go:276] 0 containers: []
	W0229 02:35:27.832429  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:27.832443  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:27.832460  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:27.902610  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:27.902658  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:27.919900  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:27.919947  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:28.003313  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:28.003337  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:28.003356  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:28.100814  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:28.100853  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:30.654289  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:30.683056  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:30.683141  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:30.734678  370051 cri.go:89] found id: ""
	I0229 02:35:30.734704  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.734712  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:30.734719  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:30.734771  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:30.780792  370051 cri.go:89] found id: ""
	I0229 02:35:30.780821  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.780830  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:30.780837  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:30.780904  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:30.827244  370051 cri.go:89] found id: ""
	I0229 02:35:30.827269  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.827278  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:30.827285  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:30.827336  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:30.871305  370051 cri.go:89] found id: ""
	I0229 02:35:30.871333  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.871342  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:30.871348  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:30.871423  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:30.910095  370051 cri.go:89] found id: ""
	I0229 02:35:30.910121  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.910130  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:30.910136  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:30.910188  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:30.955234  370051 cri.go:89] found id: ""
	I0229 02:35:30.955261  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.955271  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:30.955278  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:30.955345  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:30.996555  370051 cri.go:89] found id: ""
	I0229 02:35:30.996589  370051 logs.go:276] 0 containers: []
	W0229 02:35:30.996602  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:30.996611  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:30.996687  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:31.036424  370051 cri.go:89] found id: ""
	I0229 02:35:31.036454  370051 logs.go:276] 0 containers: []
	W0229 02:35:31.036464  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:31.036474  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.036488  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.107928  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.107987  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.125268  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.125303  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.053142  369591 api_server.go:253] Checking apiserver healthz at https://192.168.72.114:8443/healthz ...
	I0229 02:35:31.060477  369591 api_server.go:279] https://192.168.72.114:8443/healthz returned 200:
	ok
	I0229 02:35:31.062106  369591 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:35:31.062143  369591 api_server.go:131] duration metric: took 4.2933111s to wait for apiserver health ...
	I0229 02:35:31.062154  369591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:35:31.062189  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:31.062278  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:31.119877  369591 cri.go:89] found id: "60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:31.119905  369591 cri.go:89] found id: ""
	I0229 02:35:31.119915  369591 logs.go:276] 1 containers: [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75]
	I0229 02:35:31.119981  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.125569  369591 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:31.125648  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:31.193662  369591 cri.go:89] found id: "92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.193693  369591 cri.go:89] found id: ""
	I0229 02:35:31.193704  369591 logs.go:276] 1 containers: [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0]
	I0229 02:35:31.193762  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.199267  369591 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:31.199365  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:31.251832  369591 cri.go:89] found id: "869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.251862  369591 cri.go:89] found id: ""
	I0229 02:35:31.251873  369591 logs.go:276] 1 containers: [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab]
	I0229 02:35:31.251935  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.258374  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:31.258477  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:31.309718  369591 cri.go:89] found id: "d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.309745  369591 cri.go:89] found id: ""
	I0229 02:35:31.309753  369591 logs.go:276] 1 containers: [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772]
	I0229 02:35:31.309804  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.314949  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:31.315025  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:31.367936  369591 cri.go:89] found id: "1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:31.367960  369591 cri.go:89] found id: ""
	I0229 02:35:31.367970  369591 logs.go:276] 1 containers: [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e]
	I0229 02:35:31.368038  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.373072  369591 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:31.373137  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:31.420362  369591 cri.go:89] found id: "5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:31.420390  369591 cri.go:89] found id: ""
	I0229 02:35:31.420402  369591 logs.go:276] 1 containers: [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5]
	I0229 02:35:31.420470  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.427151  369591 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:31.427221  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:31.482289  369591 cri.go:89] found id: ""
	I0229 02:35:31.482321  369591 logs.go:276] 0 containers: []
	W0229 02:35:31.482333  369591 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:31.482342  369591 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:35:31.482405  369591 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:35:31.526713  369591 cri.go:89] found id: "1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.526738  369591 cri.go:89] found id: "3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.526744  369591 cri.go:89] found id: ""
	I0229 02:35:31.526755  369591 logs.go:276] 2 containers: [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a]
	I0229 02:35:31.526807  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.531874  369591 ssh_runner.go:195] Run: which crictl
	I0229 02:35:31.536727  369591 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:31.536758  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:31.555901  369591 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:31.555943  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:35:31.689587  369591 logs.go:123] Gathering logs for etcd [92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0] ...
	I0229 02:35:31.689629  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92977e2b17423b35cac8bb47b2690e8762fdd5699c79fc7614121dd26ea926e0"
	I0229 02:35:31.737625  369591 logs.go:123] Gathering logs for coredns [869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab] ...
	I0229 02:35:31.737669  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 869cb90ce44f1ee4b5672c91384c4d3e3a886fd8bdfae1ae5860a3c9b956dfab"
	I0229 02:35:31.781015  369591 logs.go:123] Gathering logs for storage-provisioner [1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35] ...
	I0229 02:35:31.781050  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3ea01e4d00060645e0e699f23167783115509db8cd4852c31aaabec2f0df35"
	I0229 02:35:31.824727  369591 logs.go:123] Gathering logs for storage-provisioner [3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a] ...
	I0229 02:35:31.824757  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c88c68c0c40f2873b89828797f724b36d1c5007ee4a8d5d1218a3ef5633dc0a"
	I0229 02:35:31.866867  369591 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:31.866897  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:31.920324  369591 logs.go:123] Gathering logs for kube-scheduler [d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772] ...
	I0229 02:35:31.920375  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2cd6c6c49c5730c45351ee68507fd53693dee08763e198b8717fab0650ef772"
	I0229 02:35:31.962783  369591 logs.go:123] Gathering logs for kube-proxy [1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e] ...
	I0229 02:35:31.962815  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1061c7e86acebdadf39098784cd9f527df3b45d7317bd8a74ceafa878ef7874e"
	I0229 02:35:32.003525  369591 logs.go:123] Gathering logs for kube-controller-manager [5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5] ...
	I0229 02:35:32.003557  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5520037685c0c181e3c8c3f167c51d87f890c4345795fda4abf8b214c3f777e5"
	I0229 02:35:32.061377  369591 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:32.061417  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:32.454041  369591 logs.go:123] Gathering logs for container status ...
	I0229 02:35:32.454097  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:32.498969  369591 logs.go:123] Gathering logs for kube-apiserver [60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75] ...
	I0229 02:35:32.499006  369591 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60cc548bfcd722928ed8a2712e3c0f174341ff08e9f65cafaed395188b0e4b75"
	I0229 02:35:30.086688  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:32.087795  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:34.585435  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:35.060469  369591 system_pods.go:59] 8 kube-system pods found
	I0229 02:35:35.060503  369591 system_pods.go:61] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.060509  369591 system_pods.go:61] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.060516  369591 system_pods.go:61] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.060521  369591 system_pods.go:61] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.060525  369591 system_pods.go:61] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.060530  369591 system_pods.go:61] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.060538  369591 system_pods.go:61] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.060543  369591 system_pods.go:61] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.060553  369591 system_pods.go:74] duration metric: took 3.99838967s to wait for pod list to return data ...
	I0229 02:35:35.060563  369591 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:35:35.063638  369591 default_sa.go:45] found service account: "default"
	I0229 02:35:35.063665  369591 default_sa.go:55] duration metric: took 3.094531ms for default service account to be created ...
	I0229 02:35:35.063676  369591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:35:35.071344  369591 system_pods.go:86] 8 kube-system pods found
	I0229 02:35:35.071366  369591 system_pods.go:89] "coredns-76f75df574-2z5w8" [39b5eb65-690b-488b-9bec-7cfabcc27829] Running
	I0229 02:35:35.071371  369591 system_pods.go:89] "etcd-no-preload-247751" [c1324812-56c9-459e-ba67-7a32973d9b38] Running
	I0229 02:35:35.071375  369591 system_pods.go:89] "kube-apiserver-no-preload-247751" [6b0caf1b-2942-4762-9e6d-fef725a17a28] Running
	I0229 02:35:35.071380  369591 system_pods.go:89] "kube-controller-manager-no-preload-247751" [98d7afde-943b-4d35-b766-f022753bef3c] Running
	I0229 02:35:35.071385  369591 system_pods.go:89] "kube-proxy-cdc4l" [7849f368-0bca-4c2b-ae72-cbacef9bbb72] Running
	I0229 02:35:35.071389  369591 system_pods.go:89] "kube-scheduler-no-preload-247751" [78edaa42-2bb2-4307-880e-885bd4995281] Running
	I0229 02:35:35.071397  369591 system_pods.go:89] "metrics-server-57f55c9bc5-zghwq" [97018e51-c009-4e33-964b-9e9e4798a48a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:35:35.071408  369591 system_pods.go:89] "storage-provisioner" [11ba0e6f-835a-42d6-a7a9-bfafedf7a7d8] Running
	I0229 02:35:35.071420  369591 system_pods.go:126] duration metric: took 7.737446ms to wait for k8s-apps to be running ...
	I0229 02:35:35.071433  369591 system_svc.go:44] waiting for kubelet service to be running ...
	I0229 02:35:35.071482  369591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:35.091472  369591 system_svc.go:56] duration metric: took 20.031453ms WaitForService to wait for kubelet.
	I0229 02:35:35.091504  369591 kubeadm.go:581] duration metric: took 4m25.872454283s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:35:35.091523  369591 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:35:35.095487  369591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:35:35.095509  369591 node_conditions.go:123] node cpu capacity is 2
	I0229 02:35:35.095546  369591 node_conditions.go:105] duration metric: took 4.018229ms to run NodePressure ...
	I0229 02:35:35.095567  369591 start.go:228] waiting for startup goroutines ...
	I0229 02:35:35.095580  369591 start.go:233] waiting for cluster config update ...
	I0229 02:35:35.095594  369591 start.go:242] writing updated cluster config ...
	I0229 02:35:35.095888  369591 ssh_runner.go:195] Run: rm -f paused
	I0229 02:35:35.154197  369591 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 02:35:35.156089  369591 out.go:177] * Done! kubectl is now configured to use "no-preload-247751" cluster and "default" namespace by default
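The interleaved pod_ready.go:102 lines above are minikube polling the metrics-server pod's Ready condition on a fixed interval until a deadline. A rough shell equivalent of that poll (a sketch, not minikube's own code; the pod name and kubeconfig path are taken from the log and vary per run):

    # Poll the Ready condition the way the pod_ready lines above do (sketch).
    KUBECONFIG=/var/lib/minikube/kubeconfig
    POD=metrics-server-57f55c9bc5-6p7f7
    deadline=$((SECONDS + 240))   # the log shows roughly a 4m wait budget
    while [ "$SECONDS" -lt "$deadline" ]; do
      ready=$(kubectl --kubeconfig="$KUBECONFIG" -n kube-system get pod "$POD" \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)
      if [ "$ready" = "True" ]; then echo "pod \"$POD\" is Ready"; exit 0; fi
      echo "pod \"$POD\" has status \"Ready\":\"False\""
      sleep 2
    done
    echo "timed out waiting for $POD" >&2; exit 1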
	W0229 02:35:31.217691  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:31.217717  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:31.217740  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:31.313847  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:31.313883  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
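The container-status command above is written defensively: `which crictl || echo crictl` uses the resolved path when available but still attempts a bare `crictl` under sudo's own PATH, and the trailing `|| sudo docker ps -a` falls back to the Docker CLI if crictl fails outright. The same pattern in isolation:

    # Fallback chain as logged above: prefer crictl (full path if resolvable),
    # otherwise fall back to docker for a container listing.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a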
	I0229 02:35:33.861648  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:33.876887  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:33.876954  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:33.921545  370051 cri.go:89] found id: ""
	I0229 02:35:33.921577  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.921588  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:33.921597  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:33.921658  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:33.972558  370051 cri.go:89] found id: ""
	I0229 02:35:33.972584  370051 logs.go:276] 0 containers: []
	W0229 02:35:33.972592  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:33.972599  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:33.972662  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:34.020821  370051 cri.go:89] found id: ""
	I0229 02:35:34.020852  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.020862  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:34.020873  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:34.020937  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:34.064076  370051 cri.go:89] found id: ""
	I0229 02:35:34.064110  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.064121  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:34.064129  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:34.064191  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:34.108523  370051 cri.go:89] found id: ""
	I0229 02:35:34.108557  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.108568  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:34.108576  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:34.108639  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:34.149444  370051 cri.go:89] found id: ""
	I0229 02:35:34.149468  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.149478  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:34.149487  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:34.149562  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:34.193780  370051 cri.go:89] found id: ""
	I0229 02:35:34.193805  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.193814  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:34.193820  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:34.193913  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:34.237088  370051 cri.go:89] found id: ""
	I0229 02:35:34.237118  370051 logs.go:276] 0 containers: []
	W0229 02:35:34.237127  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:34.237137  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:34.237151  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:34.281055  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:34.281091  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:34.333886  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:34.333925  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:34.353163  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:34.353204  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:34.465925  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:34.465951  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:34.465969  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:36.587119  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:39.086456  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:37.049957  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:37.064297  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:35:37.064384  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:35:37.105669  370051 cri.go:89] found id: ""
	I0229 02:35:37.105703  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.105711  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:35:37.105720  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:35:37.105790  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:35:37.143753  370051 cri.go:89] found id: ""
	I0229 02:35:37.143788  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.143799  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:35:37.143808  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:35:37.143880  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:35:37.180126  370051 cri.go:89] found id: ""
	I0229 02:35:37.180157  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.180166  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:35:37.180173  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:35:37.180227  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:35:37.221135  370051 cri.go:89] found id: ""
	I0229 02:35:37.221173  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.221185  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:35:37.221193  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:35:37.221261  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:35:37.258888  370051 cri.go:89] found id: ""
	I0229 02:35:37.258920  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.258932  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:35:37.258940  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:35:37.259005  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:35:37.300970  370051 cri.go:89] found id: ""
	I0229 02:35:37.300998  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.301010  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:35:37.301018  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:35:37.301105  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:35:37.349797  370051 cri.go:89] found id: ""
	I0229 02:35:37.349829  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.349841  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:35:37.349850  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:35:37.349916  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:35:37.408726  370051 cri.go:89] found id: ""
	I0229 02:35:37.408762  370051 logs.go:276] 0 containers: []
	W0229 02:35:37.408773  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 02:35:37.408787  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:35:37.408805  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:35:37.462030  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:35:37.462064  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:35:37.477836  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:35:37.477868  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:35:37.553886  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:35:37.553924  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:35:37.553941  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:35:37.644637  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:35:37.644683  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:35:40.197937  370051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:35:40.212830  370051 kubeadm.go:640] restartCluster took 4m14.648338345s
	W0229 02:35:40.212984  370051 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 02:35:40.213021  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:35:40.673169  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:40.690108  370051 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:40.702424  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:40.713782  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
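The exit-status-2 `ls` above is minikube's stale-config probe: all four control-plane kubeconfigs are absent, so there is nothing stale to clean up and the run proceeds directly to `kubeadm init`. Condensed, the check amounts to the following (a sketch of the logged behavior, not minikube's source):

    # If the kubeconfigs exist, stale-config cleanup would run first;
    # status 2 (missing files) means init can start from a clean slate.
    if sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
         /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf >/dev/null 2>&1; then
      echo "existing configs found: clean stale ones before kubeadm init"
    else
      echo "config check failed: skipping stale config cleanup"
    fi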
	I0229 02:35:40.713832  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:35:40.775345  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:35:40.775527  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:35:40.929045  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:35:40.929185  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:35:40.929310  370051 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 02:35:41.154311  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:35:41.154449  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:35:41.162905  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:35:41.317651  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:35:41.319260  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:35:41.319358  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:35:41.319458  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:35:41.319564  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:35:41.319675  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:35:41.319772  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:35:41.319857  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:35:41.319963  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:35:41.320066  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:35:41.320166  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:35:41.320289  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:35:41.320357  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:35:41.320439  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:35:41.457291  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:35:41.599703  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:35:41.766344  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:35:41.939397  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:35:41.940740  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:35:41.090698  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:43.585822  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:41.942544  370051 out.go:204]   - Booting up control plane ...
	I0229 02:35:41.942656  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:35:41.946949  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:35:41.949540  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:35:41.950426  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:35:41.953310  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:35:45.586855  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:48.085961  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:50.585602  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:52.587992  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:55.085046  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.086710  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:59.590441  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:35:57.264698  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.262039409s)
	I0229 02:35:57.264826  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:35:57.285615  369869 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:35:57.297607  369869 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:35:57.309412  369869 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:35:57.309471  369869 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 02:35:57.540175  369869 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:36:02.086317  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:04.587625  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.714158  369869 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 02:36:06.714249  369869 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:36:06.714325  369869 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:36:06.714490  369869 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:36:06.714633  369869 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 02:36:06.714742  369869 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:36:06.716059  369869 out.go:204]   - Generating certificates and keys ...
	I0229 02:36:06.716160  369869 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:36:06.716250  369869 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:36:06.716357  369869 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:36:06.716434  369869 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:36:06.716508  369869 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:36:06.716572  369869 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:36:06.716649  369869 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:36:06.716722  369869 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:36:06.716824  369869 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:36:06.716952  369869 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:36:06.717008  369869 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:36:06.717080  369869 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:36:06.717147  369869 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:36:06.717221  369869 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:36:06.717298  369869 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:36:06.717367  369869 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:36:06.717474  369869 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:36:06.717559  369869 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:36:06.718770  369869 out.go:204]   - Booting up control plane ...
	I0229 02:36:06.718866  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:36:06.718983  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:36:06.719074  369869 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:36:06.719230  369869 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:36:06.719364  369869 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:36:06.719431  369869 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 02:36:06.719628  369869 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:36:06.719749  369869 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503520 seconds
	I0229 02:36:06.719906  369869 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 02:36:06.720060  369869 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 02:36:06.720126  369869 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 02:36:06.720344  369869 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-071485 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 02:36:06.720433  369869 kubeadm.go:322] [bootstrap-token] Using token: oueq3v.8ghuyl6sece1tffl
	I0229 02:36:06.721973  369869 out.go:204]   - Configuring RBAC rules ...
	I0229 02:36:06.722107  369869 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 02:36:06.722252  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 02:36:06.722444  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 02:36:06.722643  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 02:36:06.722793  369869 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 02:36:06.722937  369869 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 02:36:06.723081  369869 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 02:36:06.723119  369869 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 02:36:06.723188  369869 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 02:36:06.723198  369869 kubeadm.go:322] 
	I0229 02:36:06.723285  369869 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 02:36:06.723310  369869 kubeadm.go:322] 
	I0229 02:36:06.723426  369869 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 02:36:06.723436  369869 kubeadm.go:322] 
	I0229 02:36:06.723467  369869 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 02:36:06.723556  369869 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 02:36:06.723637  369869 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 02:36:06.723646  369869 kubeadm.go:322] 
	I0229 02:36:06.723713  369869 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 02:36:06.723722  369869 kubeadm.go:322] 
	I0229 02:36:06.723799  369869 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 02:36:06.723809  369869 kubeadm.go:322] 
	I0229 02:36:06.723869  369869 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 02:36:06.723979  369869 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 02:36:06.724073  369869 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 02:36:06.724083  369869 kubeadm.go:322] 
	I0229 02:36:06.724178  369869 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 02:36:06.724269  369869 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 02:36:06.724279  369869 kubeadm.go:322] 
	I0229 02:36:06.724389  369869 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724520  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 \
	I0229 02:36:06.724552  369869 kubeadm.go:322] 	--control-plane 
	I0229 02:36:06.724560  369869 kubeadm.go:322] 
	I0229 02:36:06.724665  369869 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 02:36:06.724675  369869 kubeadm.go:322] 
	I0229 02:36:06.724767  369869 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token oueq3v.8ghuyl6sece1tffl \
	I0229 02:36:06.724923  369869 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fb79a3e911cbb8d39d5ee1280f12c65f1b4d97462280a461286b718d06017e37 
	I0229 02:36:06.724941  369869 cni.go:84] Creating CNI manager for ""
	I0229 02:36:06.724952  369869 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:36:06.726566  369869 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:36:07.088398  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:09.587442  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:06.727880  369869 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:36:06.786343  369869 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
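The 457-byte file scp'd above is the bridge CNI conflist minikube generates after recommending the bridge plugin for the kvm2 + crio combination. The log does not show the file's contents; a representative bridge conflist of that shape (assumed content, for illustration only):

    # Illustrative /etc/cni/net.d/1-k8s.conflist (assumed; actual bytes not in the log).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF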
	I0229 02:36:06.842349  369869 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:36:06.842420  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=default-k8s-diff-port-071485 minikube.k8s.io/updated_at=2024_02_29T02_36_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:06.842428  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.196763  369869 ops.go:34] apiserver oom_adj: -16
	I0229 02:36:07.196958  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:07.696991  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.197336  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:08.697155  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.197955  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:09.697107  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:10.197816  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.085528  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:14.085852  369508 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:10.697486  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.197744  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:11.697179  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.197614  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:12.697015  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.197983  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:13.697315  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.196982  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:14.698012  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.197896  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:15.697895  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.197062  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:16.697819  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.197222  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:17.697031  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.197683  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.697094  369869 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 02:36:18.870924  369869 kubeadm.go:1088] duration metric: took 12.028572011s to wait for elevateKubeSystemPrivileges.
	I0229 02:36:18.870961  369869 kubeadm.go:406] StartCluster complete in 5m19.353203226s
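The twenty-odd `kubectl get sa default` invocations above are a fixed-interval retry: kubeadm returns before the controller-manager has created the `default` service account, so minikube polls (at roughly 500ms spacing, per the timestamps) until the lookup succeeds. The equivalent loop:

    # Retry until the default service account exists (pattern from the log above).
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done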
	I0229 02:36:18.870986  369869 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.871077  369869 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:36:18.873654  369869 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:36:18.873954  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:36:18.874041  369869 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:36:18.874118  369869 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874130  369869 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874142  369869 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874149  369869 addons.go:243] addon storage-provisioner should already be in state true
	I0229 02:36:18.874152  369869 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-071485"
	I0229 02:36:18.874201  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874256  369869 config.go:182] Loaded profile config "default-k8s-diff-port-071485": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:36:18.874341  369869 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-071485"
	I0229 02:36:18.874359  369869 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.874367  369869 addons.go:243] addon metrics-server should already be in state true
	I0229 02:36:18.874422  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874637  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874613  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874691  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.874811  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.874846  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.892207  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40661
	I0229 02:36:18.892260  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0229 02:36:18.892967  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.892986  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.893508  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893528  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893680  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.893700  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.893936  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894102  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.894143  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0229 02:36:18.894331  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.894582  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.894594  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.894613  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.895109  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.895143  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.895508  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.896106  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.896142  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.898127  369869 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-071485"
	W0229 02:36:18.898143  369869 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:36:18.898168  369869 host.go:66] Checking if "default-k8s-diff-port-071485" exists ...
	I0229 02:36:18.898482  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.898516  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.917303  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0229 02:36:18.917472  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I0229 02:36:18.917747  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.917894  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.918493  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918510  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.918654  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.918665  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.919012  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919077  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.919229  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.919754  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.921030  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.922677  369869 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:36:18.921622  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.923872  369869 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:18.923899  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:36:18.923919  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.925237  369869 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:36:18.926153  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:36:18.924603  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45577
	I0229 02:36:18.926269  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:36:18.926303  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.927739  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.928184  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.928277  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.928299  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.930032  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.930057  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.930386  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.930456  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.930614  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.930723  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.930914  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931014  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.931133  369869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:36:18.931185  369869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:36:18.931533  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.931553  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.931576  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.931737  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.932033  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.932190  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:18.948311  369869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0229 02:36:18.949328  369869 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:36:18.949793  369869 main.go:141] libmachine: Using API Version  1
	I0229 02:36:18.949819  369869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:36:18.950313  369869 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:36:18.950529  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetState
	I0229 02:36:18.952381  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .DriverName
	I0229 02:36:18.952660  369869 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:18.952673  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 02:36:18.952689  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHHostname
	I0229 02:36:18.956332  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.956779  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:f9:08", ip: ""} in network mk-default-k8s-diff-port-071485: {Iface:virbr3 ExpiryTime:2024-02-29 03:30:44 +0000 UTC Type:0 Mac:52:54:00:81:f9:08 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:default-k8s-diff-port-071485 Clientid:01:52:54:00:81:f9:08}
	I0229 02:36:18.956808  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | domain default-k8s-diff-port-071485 has defined IP address 192.168.61.233 and MAC address 52:54:00:81:f9:08 in network mk-default-k8s-diff-port-071485
	I0229 02:36:18.957117  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHPort
	I0229 02:36:18.957313  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHKeyPath
	I0229 02:36:18.957425  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .GetSSHUsername
	I0229 02:36:18.957485  369869 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/default-k8s-diff-port-071485/id_rsa Username:docker}
	I0229 02:36:19.128114  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:36:19.141619  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 02:36:19.141649  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 02:36:19.169945  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 02:36:19.187099  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 02:36:19.187124  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 02:36:19.211358  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
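The pipeline above edits the CoreDNS Corefile in transit: the first sed expression inserts a `hosts` stanza before the `forward . /etc/resolv.conf` line so that host.minikube.internal resolves to the host's gateway IP, and the second inserts `log` before `errors`. Reconstructed from those two expressions, the patched Corefile fragment should read (surrounding directives omitted):

    # Fragment of the patched Corefile (reconstructed from the sed expressions above).
        log
        errors
        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf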
	I0229 02:36:19.289856  369869 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:36:19.289880  369869 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 02:36:19.398720  369869 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 02:36:19.414512  369869 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-071485" context rescaled to 1 replicas
	I0229 02:36:19.414562  369869 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.233 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:36:19.416389  369869 out.go:177] * Verifying Kubernetes components...
	I0229 02:36:15.586606  369508 pod_ready.go:81] duration metric: took 4m0.008250092s waiting for pod "metrics-server-57f55c9bc5-6p7f7" in "kube-system" namespace to be "Ready" ...
	E0229 02:36:15.586638  369508 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:36:15.586648  369508 pod_ready.go:38] duration metric: took 4m5.573018241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:36:15.586669  369508 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:36:15.586707  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:15.586771  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:15.644937  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:15.644969  369508 cri.go:89] found id: ""
	I0229 02:36:15.644980  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:15.645054  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.653058  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:15.653137  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:15.709225  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:15.709254  369508 cri.go:89] found id: ""
	I0229 02:36:15.709264  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:15.709333  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.715304  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:15.715391  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:15.769593  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:15.769627  369508 cri.go:89] found id: ""
	I0229 02:36:15.769637  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:15.769702  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.775157  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:15.775230  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:15.820002  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:15.820030  369508 cri.go:89] found id: ""
	I0229 02:36:15.820040  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:15.820105  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.827058  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:15.827122  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:15.875030  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:15.875063  369508 cri.go:89] found id: ""
	I0229 02:36:15.875074  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:15.875142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.880489  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:15.880555  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:15.929452  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:15.929476  369508 cri.go:89] found id: ""
	I0229 02:36:15.929484  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:15.929545  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:15.934321  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:15.934396  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:15.981960  369508 cri.go:89] found id: ""
	I0229 02:36:15.981997  369508 logs.go:276] 0 containers: []
	W0229 02:36:15.982006  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:15.982014  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:15.982077  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:16.034169  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.034196  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.034201  369508 cri.go:89] found id: ""
	I0229 02:36:16.034210  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:16.034281  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.039463  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:16.044719  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:16.044748  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:16.111048  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:16.111084  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:16.278784  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:16.278832  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:16.333048  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:16.333085  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:16.376514  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:16.376555  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:16.420840  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:16.420944  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:16.468273  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:16.468308  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:16.526001  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:16.526043  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:16.569084  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:16.569120  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:16.609818  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:16.609847  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:16.660979  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:16.661019  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:16.677397  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:16.677432  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:16.732421  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:16.732464  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:19.417788  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:21.277741  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.107753576s)
	I0229 02:36:21.277802  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277815  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.066425449s)
	I0229 02:36:21.277873  369869 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0229 02:36:21.277840  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.149690589s)
	I0229 02:36:21.277908  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.277918  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278277  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278323  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278331  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278339  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278351  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278445  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278458  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.278465  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.278474  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.278519  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.278592  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.278603  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280452  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.280470  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.280482  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.300880  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.300907  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.301193  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.301217  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.572633  369869 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.154816183s)
	I0229 02:36:21.572676  369869 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.572635  369869 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.173852857s)
	I0229 02:36:21.572814  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.572842  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573153  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) DBG | Closing plugin on server side
	I0229 02:36:21.573207  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573215  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573228  369869 main.go:141] libmachine: Making call to close driver server
	I0229 02:36:21.573236  369869 main.go:141] libmachine: (default-k8s-diff-port-071485) Calling .Close
	I0229 02:36:21.573538  369869 main.go:141] libmachine: Successfully made call to close driver server
	I0229 02:36:21.573575  369869 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 02:36:21.573587  369869 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-071485"
	I0229 02:36:21.575111  369869 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
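	The CoreDNS rewrite that completed at 02:36:21 above injects a host record via the sed pipeline shown in the log; reconstructed from that sed expression, the Corefile stanza it adds reads as follows, letting pods resolve host.minikube.internal to the host's address on the VM network:
	    hosts {
	       192.168.61.1 host.minikube.internal
	       fallthrough
	    }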
	I0229 02:36:19.738493  369508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:36:19.758171  369508 api_server.go:72] duration metric: took 4m17.008228834s to wait for apiserver process to appear ...
	I0229 02:36:19.758199  369508 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:36:19.758281  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:19.758349  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:19.811042  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:19.811071  369508 cri.go:89] found id: ""
	I0229 02:36:19.811082  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:19.811145  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.817952  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:19.818034  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:19.871006  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:19.871033  369508 cri.go:89] found id: ""
	I0229 02:36:19.871043  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:19.871109  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.877440  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:19.877512  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:19.928043  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:19.928071  369508 cri.go:89] found id: ""
	I0229 02:36:19.928081  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:19.928142  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.935299  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:19.935363  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:19.977360  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:19.977391  369508 cri.go:89] found id: ""
	I0229 02:36:19.977402  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:19.977482  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:19.982361  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:19.982442  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:20.025903  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.025931  369508 cri.go:89] found id: ""
	I0229 02:36:20.025941  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:20.026012  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.031390  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:20.031477  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:20.080768  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.080792  369508 cri.go:89] found id: ""
	I0229 02:36:20.080800  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:20.080864  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.087322  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:20.087388  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:20.139067  369508 cri.go:89] found id: ""
	I0229 02:36:20.139111  369508 logs.go:276] 0 containers: []
	W0229 02:36:20.139124  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:20.139132  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:20.139195  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:20.193052  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:20.193085  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:20.193091  369508 cri.go:89] found id: ""
	I0229 02:36:20.193101  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:20.193174  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.199740  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:20.205385  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:20.205414  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:20.360843  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:20.360894  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:20.411077  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:20.411113  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:20.459855  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:20.459910  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:20.517056  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:20.517101  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:20.568151  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:20.568185  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:20.637131  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:20.637165  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.144933  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:21.144980  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:21.206565  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:21.206607  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:21.257071  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:21.257118  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:21.315541  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:21.315589  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:21.358630  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:21.358665  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:21.398170  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:21.398201  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:23.914059  369508 api_server.go:253] Checking apiserver healthz at https://192.168.50.218:8443/healthz ...
	I0229 02:36:23.923854  369508 api_server.go:279] https://192.168.50.218:8443/healthz returned 200:
	ok
	I0229 02:36:23.926443  369508 api_server.go:141] control plane version: v1.28.4
	I0229 02:36:23.926466  369508 api_server.go:131] duration metric: took 4.168260413s to wait for apiserver health ...
	I0229 02:36:23.926475  369508 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:36:23.926506  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:36:23.926566  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:36:24.013825  369508 cri.go:89] found id: "74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:24.013849  369508 cri.go:89] found id: ""
	I0229 02:36:24.013857  369508 logs.go:276] 1 containers: [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226]
	I0229 02:36:24.013913  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.019432  369508 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:36:24.019506  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:36:24.078857  369508 cri.go:89] found id: "208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.078877  369508 cri.go:89] found id: ""
	I0229 02:36:24.078885  369508 logs.go:276] 1 containers: [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121]
	I0229 02:36:24.078945  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.083761  369508 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:36:24.083822  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:36:24.133681  369508 cri.go:89] found id: "6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:24.133707  369508 cri.go:89] found id: ""
	I0229 02:36:24.133717  369508 logs.go:276] 1 containers: [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c]
	I0229 02:36:24.133779  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.139165  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:36:24.139228  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:36:24.185863  369508 cri.go:89] found id: "57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.185883  369508 cri.go:89] found id: ""
	I0229 02:36:24.185892  369508 logs.go:276] 1 containers: [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d]
	I0229 02:36:24.185939  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.191094  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:36:24.191164  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:36:24.232922  369508 cri.go:89] found id: "8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.232953  369508 cri.go:89] found id: ""
	I0229 02:36:24.232963  369508 logs.go:276] 1 containers: [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb]
	I0229 02:36:24.233031  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.238154  369508 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:36:24.238252  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:36:24.280735  369508 cri.go:89] found id: "8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:24.280760  369508 cri.go:89] found id: ""
	I0229 02:36:24.280769  369508 logs.go:276] 1 containers: [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa]
	I0229 02:36:24.280842  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.285497  369508 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:36:24.285558  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:36:24.324979  369508 cri.go:89] found id: ""
	I0229 02:36:24.325007  369508 logs.go:276] 0 containers: []
	W0229 02:36:24.325016  369508 logs.go:278] No container was found matching "kindnet"
	I0229 02:36:24.325022  369508 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:36:24.325085  369508 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:36:24.370875  369508 cri.go:89] found id: "5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:24.370908  369508 cri.go:89] found id: "4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:24.370912  369508 cri.go:89] found id: ""
	I0229 02:36:24.370919  369508 logs.go:276] 2 containers: [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5]
	I0229 02:36:24.370973  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.378247  369508 ssh_runner.go:195] Run: which crictl
	I0229 02:36:24.382856  369508 logs.go:123] Gathering logs for etcd [208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121] ...
	I0229 02:36:24.382899  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 208354e254f6ca3a15b27af6853e69dd85da9467854f581777c3e7139f905121"
	I0229 02:36:24.430889  369508 logs.go:123] Gathering logs for kube-scheduler [57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d] ...
	I0229 02:36:24.430919  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57de9d45eaff6fa3e12814cee8450f961c26c7e49e42509d9255fe264acb924d"
	I0229 02:36:24.470370  369508 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:36:24.470407  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:36:21.576300  369869 addons.go:505] enable addons completed in 2.702258052s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0229 02:36:21.582468  369869 node_ready.go:49] node "default-k8s-diff-port-071485" has status "Ready":"True"
	I0229 02:36:21.582494  369869 node_ready.go:38] duration metric: took 9.804213ms waiting for node "default-k8s-diff-port-071485" to be "Ready" ...
	I0229 02:36:21.582506  369869 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 02:36:21.608694  369869 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125662  369869 pod_ready.go:92] pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.125695  369869 pod_ready.go:81] duration metric: took 1.51697387s waiting for pod "coredns-5dd5756b68-xj4sh" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.125707  369869 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141831  369869 pod_ready.go:92] pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.141855  369869 pod_ready.go:81] duration metric: took 16.140002ms waiting for pod "etcd-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.141864  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154216  369869 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.154261  369869 pod_ready.go:81] duration metric: took 12.389751ms waiting for pod "kube-apiserver-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.154276  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166057  369869 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.166085  369869 pod_ready.go:81] duration metric: took 11.798242ms waiting for pod "kube-controller-manager-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.166098  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179414  369869 pod_ready.go:92] pod "kube-proxy-gr44w" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.179437  369869 pod_ready.go:81] duration metric: took 13.331411ms waiting for pod "kube-proxy-gr44w" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.179447  369869 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576569  369869 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace has status "Ready":"True"
	I0229 02:36:23.576597  369869 pod_ready.go:81] duration metric: took 397.142516ms waiting for pod "kube-scheduler-default-k8s-diff-port-071485" in "kube-system" namespace to be "Ready" ...
	I0229 02:36:23.576611  369869 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
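	The per-pod waits above poll each pod's Ready condition through the API. The same check can be expressed with kubectl against this profile's context (an illustrative equivalent using a label from the log's own wait list, not the code path minikube uses):
	    kubectl --context default-k8s-diff-port-071485 -n kube-system \
	      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m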
	I0229 02:36:21.953781  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:36:21.954431  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:21.954685  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:24.880947  369508 logs.go:123] Gathering logs for container status ...
	I0229 02:36:24.880985  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:36:24.939045  369508 logs.go:123] Gathering logs for kube-proxy [8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb] ...
	I0229 02:36:24.939079  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f95a3a0ad6f6befa39c001ba9d2b50402d47b5979698fa004d860460ac4daeb"
	I0229 02:36:24.987109  369508 logs.go:123] Gathering logs for kube-controller-manager [8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa] ...
	I0229 02:36:24.987144  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcb33bb23e696dc1d7d0da0fff1ec1c3421b34a2b09a0c27c8efcc94467a7fa"
	I0229 02:36:25.049095  369508 logs.go:123] Gathering logs for storage-provisioner [5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f] ...
	I0229 02:36:25.049131  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d03e33e30323fd56fe51fc542f35bbe523e374beb2f316f83f4033ea43cb19f"
	I0229 02:36:25.091654  369508 logs.go:123] Gathering logs for kubelet ...
	I0229 02:36:25.091686  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 02:36:25.153281  369508 logs.go:123] Gathering logs for dmesg ...
	I0229 02:36:25.153326  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:36:25.169544  369508 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:36:25.169575  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:36:25.294469  369508 logs.go:123] Gathering logs for kube-apiserver [74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226] ...
	I0229 02:36:25.294504  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74bd751559a70c01f201e189630ce6ae5923a2f7aef1b686847d2e2f73a7e226"
	I0229 02:36:25.346867  369508 logs.go:123] Gathering logs for coredns [6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c] ...
	I0229 02:36:25.346900  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f79a4150c635ed4a2eb5b1221aa9b3e30c99f098068645e780a8ceb45eb9e1c"
	I0229 02:36:25.388876  369508 logs.go:123] Gathering logs for storage-provisioner [4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5] ...
	I0229 02:36:25.388921  369508 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d79154ed71a07fcdd7574dfd2a38fa21f974584dd78e4797517059d3db904f5"
	I0229 02:36:27.937848  369508 system_pods.go:59] 8 kube-system pods found
	I0229 02:36:27.937878  369508 system_pods.go:61] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.937883  369508 system_pods.go:61] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.937888  369508 system_pods.go:61] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.937891  369508 system_pods.go:61] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.937894  369508 system_pods.go:61] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.937898  369508 system_pods.go:61] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.937903  369508 system_pods.go:61] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.937908  369508 system_pods.go:61] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.937922  369508 system_pods.go:74] duration metric: took 4.011440564s to wait for pod list to return data ...
	I0229 02:36:27.937933  369508 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:36:27.940602  369508 default_sa.go:45] found service account: "default"
	I0229 02:36:27.940623  369508 default_sa.go:55] duration metric: took 2.681589ms for default service account to be created ...
	I0229 02:36:27.940632  369508 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:36:27.947433  369508 system_pods.go:86] 8 kube-system pods found
	I0229 02:36:27.947455  369508 system_pods.go:89] "coredns-5dd5756b68-kt28m" [faf7edc3-f4db-4d5e-ad63-ccbec64dfac4] Running
	I0229 02:36:27.947466  369508 system_pods.go:89] "etcd-embed-certs-915633" [8dceb199-bfbd-4a9f-ab44-dd45464fa697] Running
	I0229 02:36:27.947472  369508 system_pods.go:89] "kube-apiserver-embed-certs-915633" [07d8b93a-3020-4929-aaaa-8de4135bcc4e] Running
	I0229 02:36:27.947482  369508 system_pods.go:89] "kube-controller-manager-embed-certs-915633" [5bac3555-8ede-4e64-b823-c957275e8da2] Running
	I0229 02:36:27.947491  369508 system_pods.go:89] "kube-proxy-6tt7l" [6e8eb713-a0cf-49f3-b93d-7493a9d763ca] Running
	I0229 02:36:27.947497  369508 system_pods.go:89] "kube-scheduler-embed-certs-915633" [77e768be-593b-4353-b497-55316a40cbb4] Running
	I0229 02:36:27.947508  369508 system_pods.go:89] "metrics-server-57f55c9bc5-6p7f7" [b1dc8143-2d47-4cea-b4a1-61808350d2d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:36:27.947518  369508 system_pods.go:89] "storage-provisioner" [ce36b3fe-a726-46f1-a411-c8e26d3b051a] Running
	I0229 02:36:27.947531  369508 system_pods.go:126] duration metric: took 6.892538ms to wait for k8s-apps to be running ...
	I0229 02:36:27.947539  369508 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:36:27.947591  369508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:36:27.965730  369508 system_svc.go:56] duration metric: took 18.181663ms WaitForService to wait for kubelet.
	I0229 02:36:27.965756  369508 kubeadm.go:581] duration metric: took 4m25.215820473s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:36:27.965780  369508 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:36:27.970094  369508 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:36:27.970123  369508 node_conditions.go:123] node cpu capacity is 2
	I0229 02:36:27.970138  369508 node_conditions.go:105] duration metric: took 4.347423ms to run NodePressure ...
	I0229 02:36:27.970152  369508 start.go:228] waiting for startup goroutines ...
	I0229 02:36:27.970162  369508 start.go:233] waiting for cluster config update ...
	I0229 02:36:27.970175  369508 start.go:242] writing updated cluster config ...
	I0229 02:36:27.970529  369508 ssh_runner.go:195] Run: rm -f paused
	I0229 02:36:28.020686  369508 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:36:28.022730  369508 out.go:177] * Done! kubectl is now configured to use "embed-certs-915633" cluster and "default" namespace by default
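	The healthz probe logged at 02:36:23 above (HTTP 200 with body "ok") can be reproduced by hand against the same endpoint; a sketch, assuming the cluster still grants the default anonymous access to /healthz:
	    curl -k https://192.168.50.218:8443/healthz
	    # a healthy apiserver answers: ok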
	I0229 02:36:25.585985  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:28.085278  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:26.954801  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:26.955093  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:30.583462  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:32.584198  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:34.585129  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:37.085551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:39.584450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:36.955344  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:36.955543  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:36:41.585000  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:44.083919  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:46.085694  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:48.583474  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:50.584026  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:53.084622  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:55.084729  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:57.084941  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:59.586329  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:36:56.957911  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:36:56.958178  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:02.085189  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:04.085672  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:06.586906  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:09.085130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:11.583811  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:13.585179  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:16.083670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:18.084884  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:20.584395  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:22.585487  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:24.586088  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:26.586608  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:29.084644  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:31.585292  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:34.083690  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:36.959509  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:37:36.959795  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:37:36.959812  370051 kubeadm.go:322] 
	I0229 02:37:36.959848  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:37:36.959887  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:37:36.959893  370051 kubeadm.go:322] 
	I0229 02:37:36.959937  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:37:36.959991  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:37:36.960142  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:37:36.960167  370051 kubeadm.go:322] 
	I0229 02:37:36.960282  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:37:36.960318  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:37:36.960362  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:37:36.960371  370051 kubeadm.go:322] 
	I0229 02:37:36.960482  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:37:36.960617  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:37:36.960756  370051 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:37:36.960839  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:37:36.960951  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:37:36.961015  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:37:36.961366  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:37:36.961507  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:37:36.961616  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 02:37:36.961763  370051 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
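	Note that kubeadm's hint text above is docker-oriented, while this job runs CRI-O (the reset below passes --cri-socket /var/run/crio/crio.sock); the equivalent triage on this node would use the same crictl binary the log already invokes, for example:
	    sudo systemctl status kubelet                    # is the kubelet unit active?
	    sudo journalctl -xeu kubelet -n 400              # recent kubelet logs
	    sudo crictl ps -a | grep kube | grep -v pause    # CRI-O analogue of 'docker ps -a | grep kube'
	    sudo crictl logs CONTAINERID                     # inspect a failing control-plane container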
	
	I0229 02:37:36.961835  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0229 02:37:37.427665  370051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:37:37.443045  370051 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:37:37.456937  370051 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:37:37.456979  370051 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 02:37:37.529093  370051 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 02:37:37.529246  370051 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 02:37:37.670260  370051 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 02:37:37.670417  370051 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 02:37:37.670548  370051 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 02:37:37.904220  370051 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 02:37:37.905569  370051 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 02:37:37.914919  370051 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 02:37:38.070911  370051 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 02:37:38.072738  370051 out.go:204]   - Generating certificates and keys ...
	I0229 02:37:38.072860  370051 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 02:37:38.072951  370051 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 02:37:38.073049  370051 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 02:37:38.073132  370051 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 02:37:38.073230  370051 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 02:37:38.073299  370051 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 02:37:38.073376  370051 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 02:37:38.073458  370051 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 02:37:38.073566  370051 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 02:37:38.073680  370051 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 02:37:38.073720  370051 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 02:37:38.073794  370051 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 02:37:38.209805  370051 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 02:37:38.305550  370051 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 02:37:38.464715  370051 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 02:37:38.623139  370051 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 02:37:38.624364  370051 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 02:37:36.084556  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.086561  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:38.625883  370051 out.go:204]   - Booting up control plane ...
	I0229 02:37:38.626039  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 02:37:38.630668  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 02:37:38.631740  370051 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 02:37:38.632687  370051 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 02:37:38.636043  370051 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 02:37:40.583589  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:42.583968  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:44.584409  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:46.586413  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:49.084223  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:51.584770  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:53.584871  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:55.585299  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:37:58.084753  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:00.584432  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:03.085511  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:05.585519  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:08.085774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:10.087984  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:12.584744  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:15.085757  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:17.584807  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:19.588130  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:18.637746  370051 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 02:38:18.638616  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:18.638883  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
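	(The [kubelet-check] failures above are kubeadm repeatedly probing the kubelet's local healthz endpoint until its wait budget runs out. As a rough illustration — not kubeadm's actual code — a minimal Go probe loop against the same endpoint might look like this; the retry interval and overall budget are assumptions:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// Probe the kubelet healthz endpoint the way the kubelet-check does:
	// retry every few seconds until it answers 200 or the deadline passes.
	func waitForKubelet(url string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		client := &http.Client{Timeout: 2 * time.Second}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // kubelet is up
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		// Same endpoint the log shows: http://localhost:10248/healthz
		if err := waitForKubelet("http://localhost:10248/healthz", 5*time.Second, 40*time.Second); err != nil {
			fmt.Println(err)
		}
	}

	Each refused GET is what produces the "dial tcp 127.0.0.1:10248: connect: connection refused" line repeated through the log.)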
	I0229 02:38:22.084442  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:24.085227  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:23.639374  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:23.639613  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:26.087774  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:28.584872  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:30.587375  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.085060  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:35.086106  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:33.640169  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:33.640468  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:37.584670  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:40.085797  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:42.585365  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:44.587079  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:46.590638  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:49.086500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:51.584286  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.587405  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:53.640871  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:38:53.641147  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:38:56.084551  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:38:58.085668  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:00.086247  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:02.588854  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:05.085163  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:07.090885  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:09.583687  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:11.585184  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:14.085800  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:16.086643  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:18.584073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:21.084992  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:23.585496  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:25.586111  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:28.086464  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:33.642813  370051 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 02:39:33.643083  370051 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 02:39:33.643099  370051 kubeadm.go:322] 
	I0229 02:39:33.643153  370051 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 02:39:33.643206  370051 kubeadm.go:322] 	timed out waiting for the condition
	I0229 02:39:33.643213  370051 kubeadm.go:322] 
	I0229 02:39:33.643252  370051 kubeadm.go:322] This error is likely caused by:
	I0229 02:39:33.643296  370051 kubeadm.go:322] 	- The kubelet is not running
	I0229 02:39:33.643443  370051 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 02:39:33.643455  370051 kubeadm.go:322] 
	I0229 02:39:33.643605  370051 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 02:39:33.643655  370051 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 02:39:33.643700  370051 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 02:39:33.643714  370051 kubeadm.go:322] 
	I0229 02:39:33.643871  370051 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 02:39:33.644040  370051 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 02:39:33.644193  370051 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 02:39:33.644272  370051 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 02:39:33.644371  370051 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 02:39:33.644412  370051 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 02:39:33.644855  370051 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 02:39:33.644972  370051 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 02:39:33.645065  370051 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 02:39:33.645132  370051 kubeadm.go:406] StartCluster complete in 8m8.138449101s
	I0229 02:39:33.645178  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:39:33.645255  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:39:33.699121  370051 cri.go:89] found id: ""
	I0229 02:39:33.699154  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.699166  370051 logs.go:278] No container was found matching "kube-apiserver"
	I0229 02:39:33.699174  370051 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:39:33.699240  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:39:33.747229  370051 cri.go:89] found id: ""
	I0229 02:39:33.747260  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.747272  370051 logs.go:278] No container was found matching "etcd"
	I0229 02:39:33.747279  370051 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:39:33.747349  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:39:33.789303  370051 cri.go:89] found id: ""
	I0229 02:39:33.789334  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.789343  370051 logs.go:278] No container was found matching "coredns"
	I0229 02:39:33.789350  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:39:33.789413  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:39:33.832769  370051 cri.go:89] found id: ""
	I0229 02:39:33.832801  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.832814  370051 logs.go:278] No container was found matching "kube-scheduler"
	I0229 02:39:33.832824  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:39:33.832891  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:39:33.881508  370051 cri.go:89] found id: ""
	I0229 02:39:33.881543  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.881554  370051 logs.go:278] No container was found matching "kube-proxy"
	I0229 02:39:33.881571  370051 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:39:33.881635  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:39:33.941691  370051 cri.go:89] found id: ""
	I0229 02:39:33.941728  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.941740  370051 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 02:39:33.941749  370051 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:39:33.941822  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:39:33.990639  370051 cri.go:89] found id: ""
	I0229 02:39:33.990681  370051 logs.go:276] 0 containers: []
	W0229 02:39:33.990704  370051 logs.go:278] No container was found matching "kindnet"
	I0229 02:39:33.990713  370051 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0229 02:39:33.990774  370051 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0229 02:39:34.038426  370051 cri.go:89] found id: ""
	I0229 02:39:34.038460  370051 logs.go:276] 0 containers: []
	W0229 02:39:34.038470  370051 logs.go:278] No container was found matching "kubernetes-dashboard"
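	(The cri.go/logs.go sequence above runs crictl once per control-plane component and treats empty output as "no container found". A simplified Go sketch of that probe loop, assuming crictl and sudo are available — an illustration, not minikube's implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Components the report probes for, in the same order as the log.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// crictl ps -a --quiet prints one container ID per line, or nothing.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}

	Every component returning an empty ID list here is consistent with the kubelet never having started the static pods.)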
	I0229 02:39:34.038480  370051 logs.go:123] Gathering logs for dmesg ...
	I0229 02:39:34.038497  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:39:34.054571  370051 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:39:34.054604  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 02:39:34.131297  370051 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 02:39:34.131323  370051 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:39:34.131337  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:39:34.232302  370051 logs.go:123] Gathering logs for container status ...
	I0229 02:39:34.232349  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:39:34.283314  370051 logs.go:123] Gathering logs for kubelet ...
	I0229 02:39:34.283351  370051 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:39:34.336858  370051 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 02:39:34.336920  370051 out.go:239] * 
	W0229 02:39:34.336985  370051 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:39:34.337006  370051 out.go:239] * 
	W0229 02:39:34.337787  370051 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 02:39:34.340744  370051 out.go:177] 
	W0229 02:39:34.342096  370051 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 02:39:34.342137  370051 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 02:39:34.342160  370051 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 02:39:34.343540  370051 out.go:177] 
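	(The suggestion above points at a possible cgroup-driver mismatch between the kubelet and CRI-O. One way to eyeball both settings is to read the two config files side by side; the paths below are conventional defaults and are assumptions that may differ on a given guest image:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// Look for a key like "cgroupDriver:" (kubelet) or "cgroup_manager" (CRI-O)
	// in a config file and print the matching line, so a mismatch is easy to spot.
	func grepSetting(path, key string) {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Printf("%s: %v\n", path, err)
			return
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.Contains(line, key) {
				fmt.Printf("%s: %s\n", path, strings.TrimSpace(line))
			}
		}
	}

	func main() {
		// Assumed default locations for kubelet and CRI-O configuration.
		grepSetting("/var/lib/kubelet/config.yaml", "cgroupDriver")
		grepSetting("/etc/crio/crio.conf", "cgroup_manager")
	}

	If the two values disagree, the --extra-config=kubelet.cgroup-driver=systemd workaround named in the suggestion is the usual fix.)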
	I0229 02:39:30.584963  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:32.585599  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:34.588073  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:37.085513  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:39.584721  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:41.585072  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:44.086996  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:46.587437  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:49.083819  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:51.084472  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:53.085522  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:55.585518  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:39:58.084454  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:00.085075  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:02.588500  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:05.083707  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:07.084423  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:09.584552  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:11.590611  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:14.084618  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:16.597479  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:19.086312  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:21.586450  369869 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace has status "Ready":"False"
	I0229 02:40:23.583798  369869 pod_ready.go:81] duration metric: took 4m0.007166298s waiting for pod "metrics-server-57f55c9bc5-fpwzl" in "kube-system" namespace to be "Ready" ...
	E0229 02:40:23.583824  369869 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 02:40:23.583834  369869 pod_ready.go:38] duration metric: took 4m2.001316522s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
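	(The wait that ends here is a poll-until-deadline loop: check readiness every couple of seconds and give up when the 4m0s context expires. A minimal sketch of that pattern — illustrative, not pod_ready.go itself:

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// Poll a readiness check every interval until it passes or ctx expires,
	// mirroring the wait loop that timed out above after 4m0s.
	func waitReady(ctx context.Context, interval time.Duration, ready func() bool) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if ready() {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // context.DeadlineExceeded, as in the log
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		err := waitReady(ctx, 2*time.Second, func() bool {
			return false // stand-in for "pod metrics-server is Ready"
		})
		fmt.Println(err)
	}

	Returning ctx.Err() is what surfaces as the "WaitExtra: waitPodCondition: context deadline exceeded" line above.)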
	I0229 02:40:23.583860  369869 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:40:23.583899  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:23.584002  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:23.655958  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:23.655987  369869 cri.go:89] found id: ""
	I0229 02:40:23.655997  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:23.656057  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.661125  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:23.661199  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:23.712373  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:23.712400  369869 cri.go:89] found id: ""
	I0229 02:40:23.712410  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:23.712508  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.718149  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:23.718209  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:23.775835  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:23.775858  369869 cri.go:89] found id: ""
	I0229 02:40:23.775867  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:23.775923  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.780698  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:23.780792  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:23.825914  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:23.825939  369869 cri.go:89] found id: ""
	I0229 02:40:23.825949  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:23.826017  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.830870  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:23.830932  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:23.868737  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:23.868767  369869 cri.go:89] found id: ""
	I0229 02:40:23.868777  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:23.868841  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.873522  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:23.873598  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:23.918640  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:23.918663  369869 cri.go:89] found id: ""
	I0229 02:40:23.918671  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:23.918725  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:23.923456  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:23.923517  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:23.963045  369869 cri.go:89] found id: ""
	I0229 02:40:23.963071  369869 logs.go:276] 0 containers: []
	W0229 02:40:23.963080  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:23.963085  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:23.963136  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:24.006380  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:24.006402  369869 cri.go:89] found id: ""
	I0229 02:40:24.006409  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:24.006459  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:24.012228  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:24.012269  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:24.095110  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:24.095354  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
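	(The "Found kubelet problem" warnings come from scanning the gathered journalctl output for known error patterns. A simplified sketch of such a scan; the pattern list here is purely illustrative, not minikube's actual set:

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Fetch the same slice of kubelet logs the report gathers.
		out, err := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
		if err != nil {
			fmt.Println("journalctl:", err)
			return
		}
		// Illustrative patterns only; minikube keeps its own list of known problems.
		patterns := []string{"Failed to watch", "is forbidden", "connection refused"}
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			line := sc.Text()
			for _, p := range patterns {
				if strings.Contains(line, p) {
					fmt.Println("Found kubelet problem:", line)
					break
				}
			}
		}
	}

	The two flagged lines above are a transient RBAC race — the node briefly lacking permission to read the coredns ConfigMap — rather than the cause of the failure.)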
	I0229 02:40:24.117199  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:24.117229  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:24.181064  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:24.181126  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:24.239267  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:24.239305  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:24.283248  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:24.283281  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:24.746786  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:24.746831  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:24.764451  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:24.764487  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:24.917582  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:24.917625  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:24.980095  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:24.980142  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:25.028219  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:25.028253  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:25.083840  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:25.083874  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:25.131148  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:25.131179  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:25.179314  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:25.179340  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:25.179415  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:25.179432  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:25.179455  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:25.179471  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:25.179479  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:35.181209  369869 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:40:35.199982  369869 api_server.go:72] duration metric: took 4m15.785374734s to wait for apiserver process to appear ...
	I0229 02:40:35.200012  369869 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:40:35.200052  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:35.200109  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:35.241760  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:35.241786  369869 cri.go:89] found id: ""
	I0229 02:40:35.241795  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:35.241846  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.247188  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:35.247294  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:35.293992  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:35.294022  369869 cri.go:89] found id: ""
	I0229 02:40:35.294033  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:35.294098  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.298900  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:35.298971  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:35.340809  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:35.340835  369869 cri.go:89] found id: ""
	I0229 02:40:35.340843  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:35.340903  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.345913  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:35.345988  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:35.392027  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:35.392061  369869 cri.go:89] found id: ""
	I0229 02:40:35.392072  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:35.392140  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.397043  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:35.397120  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:35.452900  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:35.452931  369869 cri.go:89] found id: ""
	I0229 02:40:35.452942  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:35.453014  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.459221  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:35.459303  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:35.503530  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:35.503555  369869 cri.go:89] found id: ""
	I0229 02:40:35.503563  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:35.503615  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.509021  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:35.509083  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:35.553777  369869 cri.go:89] found id: ""
	I0229 02:40:35.553803  369869 logs.go:276] 0 containers: []
	W0229 02:40:35.553812  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:35.553818  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:35.553868  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:35.605234  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:35.605259  369869 cri.go:89] found id: ""
	I0229 02:40:35.605267  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:35.605333  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:35.610433  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:35.610465  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:36.030757  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:36.030807  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:36.047193  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:36.047224  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:36.105936  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:36.105983  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:36.169028  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:36.169080  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:36.241640  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:36.241678  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:36.284787  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:36.284822  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:36.333264  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:36.333293  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:36.385436  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:36.385468  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:36.463289  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:36.463491  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:36.485748  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:36.485782  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:36.604181  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:36.604218  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:36.659210  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:36.659247  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:36.704612  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:36.704640  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:36.704695  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:36.704706  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:36.704712  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:36.704719  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:36.704726  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:46.705868  369869 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8444/healthz ...
	I0229 02:40:46.711301  369869 api_server.go:279] https://192.168.61.233:8444/healthz returned 200:
	ok
	I0229 02:40:46.713000  369869 api_server.go:141] control plane version: v1.28.4
	I0229 02:40:46.713025  369869 api_server.go:131] duration metric: took 11.513005073s to wait for apiserver health ...
	I0229 02:40:46.713034  369869 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:40:46.713061  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0229 02:40:46.713121  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0229 02:40:46.759486  369869 cri.go:89] found id: "f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:46.759505  369869 cri.go:89] found id: ""
	I0229 02:40:46.759517  369869 logs.go:276] 1 containers: [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2]
	I0229 02:40:46.759581  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.764215  369869 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0229 02:40:46.764299  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0229 02:40:46.805016  369869 cri.go:89] found id: "da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:46.805042  369869 cri.go:89] found id: ""
	I0229 02:40:46.805049  369869 logs.go:276] 1 containers: [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861]
	I0229 02:40:46.805113  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.810213  369869 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0229 02:40:46.810284  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0229 02:40:46.862825  369869 cri.go:89] found id: "450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:46.862855  369869 cri.go:89] found id: ""
	I0229 02:40:46.862867  369869 logs.go:276] 1 containers: [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694]
	I0229 02:40:46.862923  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.867531  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0229 02:40:46.867588  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0229 02:40:46.914211  369869 cri.go:89] found id: "15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:46.914247  369869 cri.go:89] found id: ""
	I0229 02:40:46.914258  369869 logs.go:276] 1 containers: [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349]
	I0229 02:40:46.914327  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.918857  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0229 02:40:46.918921  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0229 02:40:46.959981  369869 cri.go:89] found id: "44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:46.960016  369869 cri.go:89] found id: ""
	I0229 02:40:46.960027  369869 logs.go:276] 1 containers: [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f]
	I0229 02:40:46.960095  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:46.964789  369869 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0229 02:40:46.964846  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0229 02:40:47.009289  369869 cri.go:89] found id: "817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:47.009313  369869 cri.go:89] found id: ""
	I0229 02:40:47.009322  369869 logs.go:276] 1 containers: [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9]
	I0229 02:40:47.009390  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:47.015339  369869 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0229 02:40:47.015413  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0229 02:40:47.059195  369869 cri.go:89] found id: ""
	I0229 02:40:47.059227  369869 logs.go:276] 0 containers: []
	W0229 02:40:47.059239  369869 logs.go:278] No container was found matching "kindnet"
	I0229 02:40:47.059254  369869 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0229 02:40:47.059306  369869 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0229 02:40:47.103293  369869 cri.go:89] found id: "01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:47.103323  369869 cri.go:89] found id: ""
	I0229 02:40:47.103334  369869 logs.go:276] 1 containers: [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02]
	I0229 02:40:47.103401  369869 ssh_runner.go:195] Run: which crictl
	I0229 02:40:47.108048  369869 logs.go:123] Gathering logs for storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] ...
	I0229 02:40:47.108076  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02"
	I0229 02:40:47.157407  369869 logs.go:123] Gathering logs for CRI-O ...
	I0229 02:40:47.157441  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0229 02:40:47.591202  369869 logs.go:123] Gathering logs for container status ...
	I0229 02:40:47.591261  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 02:40:47.644877  369869 logs.go:123] Gathering logs for describe nodes ...
	I0229 02:40:47.644910  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 02:40:47.784217  369869 logs.go:123] Gathering logs for kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] ...
	I0229 02:40:47.784249  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2"
	I0229 02:40:47.839113  369869 logs.go:123] Gathering logs for kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] ...
	I0229 02:40:47.839144  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349"
	I0229 02:40:47.885581  369869 logs.go:123] Gathering logs for kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] ...
	I0229 02:40:47.885616  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f"
	I0229 02:40:47.930971  369869 logs.go:123] Gathering logs for kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] ...
	I0229 02:40:47.931009  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9"
	I0229 02:40:47.986352  369869 logs.go:123] Gathering logs for kubelet ...
	I0229 02:40:47.986437  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 02:40:48.067103  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:48.067316  369869 logs.go:138] Found kubelet problem: Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:48.088373  369869 logs.go:123] Gathering logs for dmesg ...
	I0229 02:40:48.088407  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 02:40:48.105750  369869 logs.go:123] Gathering logs for etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] ...
	I0229 02:40:48.105781  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861"
	I0229 02:40:48.154640  369869 logs.go:123] Gathering logs for coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] ...
	I0229 02:40:48.154677  369869 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694"
	I0229 02:40:48.196009  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:48.196042  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 02:40:48.196112  369869 out.go:239] X Problems detected in kubelet:
	W0229 02:40:48.196128  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: W0229 02:36:20.408330    3725 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	W0229 02:40:48.196137  369869 out.go:239]   Feb 29 02:36:20 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:36:20.408361    3725 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-071485" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-071485' and this object
	I0229 02:40:48.196146  369869 out.go:304] Setting ErrFile to fd 2...
	I0229 02:40:48.196155  369869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:40:58.203822  369869 system_pods.go:59] 8 kube-system pods found
	I0229 02:40:58.203853  369869 system_pods.go:61] "coredns-5dd5756b68-xj4sh" [e2741c05-81b2-4de6-8329-f88912d48160] Running
	I0229 02:40:58.203859  369869 system_pods.go:61] "etcd-default-k8s-diff-port-071485" [88b0e865-c53e-4829-a56a-2a3b6e405df4] Running
	I0229 02:40:58.203866  369869 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071485" [445fa1c9-589b-437d-92ca-0d15ee8228ae] Running
	I0229 02:40:58.203872  369869 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071485" [e3f60cdb-6214-4987-b692-a4921ece3895] Running
	I0229 02:40:58.203877  369869 system_pods.go:61] "kube-proxy-gr44w" [a74b553f-683a-4e1b-ac48-b4553d00b306] Running
	I0229 02:40:58.203881  369869 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071485" [4c1afe85-10be-45e5-8b99-6bd3cf12a828] Running
	I0229 02:40:58.203888  369869 system_pods.go:61] "metrics-server-57f55c9bc5-fpwzl" [5215d27e-4bf2-4331-89f2-24096dc96b90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:40:58.203893  369869 system_pods.go:61] "storage-provisioner" [d7b70f8e-1689-4526-a39f-eb8005cbecd2] Running
	I0229 02:40:58.203902  369869 system_pods.go:74] duration metric: took 11.49086169s to wait for pod list to return data ...
	I0229 02:40:58.203913  369869 default_sa.go:34] waiting for default service account to be created ...
	I0229 02:40:58.207120  369869 default_sa.go:45] found service account: "default"
	I0229 02:40:58.207145  369869 default_sa.go:55] duration metric: took 3.22533ms for default service account to be created ...
	I0229 02:40:58.207154  369869 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 02:40:58.213026  369869 system_pods.go:86] 8 kube-system pods found
	I0229 02:40:58.213056  369869 system_pods.go:89] "coredns-5dd5756b68-xj4sh" [e2741c05-81b2-4de6-8329-f88912d48160] Running
	I0229 02:40:58.213065  369869 system_pods.go:89] "etcd-default-k8s-diff-port-071485" [88b0e865-c53e-4829-a56a-2a3b6e405df4] Running
	I0229 02:40:58.213073  369869 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071485" [445fa1c9-589b-437d-92ca-0d15ee8228ae] Running
	I0229 02:40:58.213081  369869 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071485" [e3f60cdb-6214-4987-b692-a4921ece3895] Running
	I0229 02:40:58.213088  369869 system_pods.go:89] "kube-proxy-gr44w" [a74b553f-683a-4e1b-ac48-b4553d00b306] Running
	I0229 02:40:58.213094  369869 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071485" [4c1afe85-10be-45e5-8b99-6bd3cf12a828] Running
	I0229 02:40:58.213107  369869 system_pods.go:89] "metrics-server-57f55c9bc5-fpwzl" [5215d27e-4bf2-4331-89f2-24096dc96b90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:40:58.213117  369869 system_pods.go:89] "storage-provisioner" [d7b70f8e-1689-4526-a39f-eb8005cbecd2] Running
	I0229 02:40:58.213130  369869 system_pods.go:126] duration metric: took 5.970128ms to wait for k8s-apps to be running ...
	I0229 02:40:58.213142  369869 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 02:40:58.213204  369869 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:40:58.230150  369869 system_svc.go:56] duration metric: took 16.998299ms WaitForService to wait for kubelet.
	I0229 02:40:58.230178  369869 kubeadm.go:581] duration metric: took 4m38.815578079s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 02:40:58.230245  369869 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:40:58.233660  369869 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:40:58.233719  369869 node_conditions.go:123] node cpu capacity is 2
	I0229 02:40:58.233737  369869 node_conditions.go:105] duration metric: took 3.486117ms to run NodePressure ...
	I0229 02:40:58.233756  369869 start.go:228] waiting for startup goroutines ...
	I0229 02:40:58.233766  369869 start.go:233] waiting for cluster config update ...
	I0229 02:40:58.233777  369869 start.go:242] writing updated cluster config ...
	I0229 02:40:58.234079  369869 ssh_runner.go:195] Run: rm -f paused
	I0229 02:40:58.285415  369869 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 02:40:58.287433  369869 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-071485" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.668650515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175009668621258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9d27bdc-756b-48ef-8f00-d339d19a8f33 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.669134584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c43b1c5-05fb-4ec8-896c-cd460e67a4ee name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.669248710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c43b1c5-05fb-4ec8-896c-cd460e67a4ee name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.669284846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6c43b1c5-05fb-4ec8-896c-cd460e67a4ee name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.710486783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27196489-73f6-4cd0-b3f7-f508a1969ec1 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.710564615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27196489-73f6-4cd0-b3f7-f508a1969ec1 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.714097221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb53101e-ec26-4c3c-9dc3-eb2924f29047 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.714789580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175009714736790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb53101e-ec26-4c3c-9dc3-eb2924f29047 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.715739234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da6dedd7-3266-4240-b4c3-14dc7e50549d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.715849440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da6dedd7-3266-4240-b4c3-14dc7e50549d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.715908204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=da6dedd7-3266-4240-b4c3-14dc7e50549d name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.756715606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07a2e615-687c-4544-af63-8dd06f828127 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.756868268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07a2e615-687c-4544-af63-8dd06f828127 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.758034091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e5a318c-4202-43b1-98bf-5a94b98957e8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.758573855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175009758531745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e5a318c-4202-43b1-98bf-5a94b98957e8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.759164244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=700dbfc8-3be1-4012-b575-910a60ce51e7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.759304946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=700dbfc8-3be1-4012-b575-910a60ce51e7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.759350025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=700dbfc8-3be1-4012-b575-910a60ce51e7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.798126125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96387ad2-c99b-4b3a-88f2-bb70cb9defbf name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.798276470Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96387ad2-c99b-4b3a-88f2-bb70cb9defbf name=/runtime.v1.RuntimeService/Version
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.799537134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89d389ac-bad3-4f5d-b527-086a4137aa17 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.799936445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175009799911760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:105088,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89d389ac-bad3-4f5d-b527-086a4137aa17 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.800605825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64e1fa14-bd74-4efa-a5e8-4154770aab5f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.800689439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64e1fa14-bd74-4efa-a5e8-4154770aab5f name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:50:09 old-k8s-version-275488 crio[644]: time="2024-02-29 02:50:09.800728955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=64e1fa14-bd74-4efa-a5e8-4154770aab5f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 02:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052077] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045395] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.718888] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb29 02:31] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.696519] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.748716] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.071940] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.086978] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.246454] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.137859] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.350900] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[ +17.818498] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.668154] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[Feb29 02:35] systemd-fstab-generator[8056]: Ignoring "noauto" option for root device
	[  +0.077012] kauditd_printk_skb: 15 callbacks suppressed
	[Feb29 02:37] systemd-fstab-generator[9745]: Ignoring "noauto" option for root device
	[  +0.066300] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:50:10 up 19 min,  0 users,  load average: 0.00, 0.03, 0.08
	Linux old-k8s-version-275488 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 02:50:08 old-k8s-version-275488 kubelet[20403]: F0229 02:50:08.426797   20403 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:50:08 old-k8s-version-275488 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:50:08 old-k8s-version-275488 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:50:09 old-k8s-version-275488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1001.
	Feb 29 02:50:09 old-k8s-version-275488 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:50:09 old-k8s-version-275488 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20415]: I0229 02:50:09.222527   20415 server.go:410] Version: v1.16.0
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20415]: I0229 02:50:09.222858   20415 plugins.go:100] No cloud provider specified.
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20415]: I0229 02:50:09.222875   20415 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20415]: I0229 02:50:09.225642   20415 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20415]: W0229 02:50:09.229119   20415 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20415]: F0229 02:50:09.229329   20415 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:50:09 old-k8s-version-275488 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:50:09 old-k8s-version-275488 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:50:09 old-k8s-version-275488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1002.
	Feb 29 02:50:09 old-k8s-version-275488 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:50:09 old-k8s-version-275488 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20474]: I0229 02:50:09.951327   20474 server.go:410] Version: v1.16.0
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20474]: I0229 02:50:09.951583   20474 plugins.go:100] No cloud provider specified.
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20474]: I0229 02:50:09.951594   20474 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20474]: I0229 02:50:09.954002   20474 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20474]: W0229 02:50:09.954993   20474 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:50:09 old-k8s-version-275488 kubelet[20474]: F0229 02:50:09.955086   20474 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:50:09 old-k8s-version-275488 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:50:09 old-k8s-version-275488 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
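The kubelet journal at the end of the dump above is the root cause for this group: kubelet v1.16.0 dies on startup with "failed to run Kubelet: mountpoint for cpu not found" and systemd restarts it endlessly (restart counter 1001, 1002, ...). That error typically means the node exposes no cpu cgroup (v1) controller mount, which a kubelet this old requires. A minimal way to confirm the cgroup layout, sketched under the assumption that the profile is still up and reachable via minikube ssh:

	# Look for a v1 cpu controller mount inside the old-k8s-version-275488 VM
	minikube ssh -p old-k8s-version-275488 -- "mount -t cgroup | grep -w cpu || echo 'no cpu cgroup mount'"
	# A cgroup-v2-only guest mounts the unified hierarchy instead, which kubelet v1.16 cannot use
	minikube ssh -p old-k8s-version-275488 -- "stat -fc %T /sys/fs/cgroup"   # cgroup2fs means unified hierarchy

If only the unified hierarchy is present, the crash loop above is the expected outcome.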
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275488 -n old-k8s-version-275488
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 2 (274.171394ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-275488" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (89.80s)
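For anyone re-running this case, the restart loop is easiest to observe from the systemd side; a sketch, assuming a fresh run of the profile (the Audit table in the next dump shows this one was deleted at 02:50 UTC):

	# Watch systemd schedule kubelet restarts on the node
	minikube ssh -p old-k8s-version-275488 -- "sudo systemctl status kubelet --no-pager"
	minikube ssh -p old-k8s-version-275488 -- "sudo journalctl -u kubelet --no-pager -n 50"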

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (125.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-02-29 02:52:06.396692949 +0000 UTC m=+6084.089839871
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-071485 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-071485 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.162µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-071485 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
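The wait at start_stop_delete_test.go:287 is a plain label-selector poll. The equivalent manual check against the same context, as a sketch (assuming the cluster from the dump above is still reachable; note the failed describe call above died on the test's already-expired Go context after 2.162µs, so it says nothing about whether the deployment exists):

	kubectl --context default-k8s-diff-port-071485 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-071485 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=60s

An empty pod list here would mean the dashboard addon never created its pods after the stop/start cycle.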
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-071485 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-071485 logs -n 25: (1.401183583s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:22 UTC | 29 Feb 24 02:23 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-915633            | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-247751             | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-071485  | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC | 29 Feb 24 02:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:23 UTC |                     |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-275488        | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-915633                 | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-247751                  | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:36 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:25 UTC | 29 Feb 24 02:35 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-071485       | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-071485 | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:40 UTC |
	|         | default-k8s-diff-port-071485                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-275488             | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC | 29 Feb 24 02:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-275488                              | old-k8s-version-275488       | jenkins | v1.32.0 | 29 Feb 24 02:50 UTC | 29 Feb 24 02:50 UTC |
	| start   | -p newest-cni-052502 --memory=2200 --alsologtostderr   | newest-cni-052502            | jenkins | v1.32.0 | 29 Feb 24 02:50 UTC | 29 Feb 24 02:51 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-247751                                   | no-preload-247751            | jenkins | v1.32.0 | 29 Feb 24 02:50 UTC | 29 Feb 24 02:50 UTC |
	| addons  | enable metrics-server -p newest-cni-052502             | newest-cni-052502            | jenkins | v1.32.0 | 29 Feb 24 02:51 UTC | 29 Feb 24 02:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-915633                                  | embed-certs-915633           | jenkins | v1.32.0 | 29 Feb 24 02:51 UTC | 29 Feb 24 02:51 UTC |
	| stop    | -p newest-cni-052502                                   | newest-cni-052502            | jenkins | v1.32.0 | 29 Feb 24 02:51 UTC | 29 Feb 24 02:51 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-052502                  | newest-cni-052502            | jenkins | v1.32.0 | 29 Feb 24 02:51 UTC | 29 Feb 24 02:51 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-052502 --memory=2200 --alsologtostderr   | newest-cni-052502            | jenkins | v1.32.0 | 29 Feb 24 02:51 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 02:51:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 02:51:21.985654  375707 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:51:21.985772  375707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:51:21.985781  375707 out.go:304] Setting ErrFile to fd 2...
	I0229 02:51:21.985785  375707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:51:21.985964  375707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:51:21.986525  375707 out.go:298] Setting JSON to false
	I0229 02:51:21.987488  375707 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9225,"bootTime":1709165857,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:51:21.987557  375707 start.go:139] virtualization: kvm guest
	I0229 02:51:21.989534  375707 out.go:177] * [newest-cni-052502] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:51:21.990767  375707 notify.go:220] Checking for updates...
	I0229 02:51:21.990774  375707 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:51:21.992048  375707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:51:21.993265  375707 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:51:21.994566  375707 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:51:21.995733  375707 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:51:21.996828  375707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:51:21.998358  375707 config.go:182] Loaded profile config "newest-cni-052502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:51:21.998745  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:51:21.998791  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:51:22.013722  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0229 02:51:22.014107  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:51:22.014803  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:51:22.014851  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:51:22.015285  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:51:22.015454  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:51:22.015723  375707 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:51:22.016020  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:51:22.016063  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:51:22.030613  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0229 02:51:22.030975  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:51:22.031444  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:51:22.031474  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:51:22.031800  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:51:22.031990  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:51:22.066654  375707 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 02:51:22.067834  375707 start.go:299] selected driver: kvm2
	I0229 02:51:22.067845  375707 start.go:903] validating driver "kvm2" against &{Name:newest-cni-052502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:51:22.067959  375707 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:51:22.068665  375707 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:51:22.068735  375707 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 02:51:22.084265  375707 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 02:51:22.084746  375707 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 02:51:22.084845  375707 cni.go:84] Creating CNI manager for ""
	I0229 02:51:22.084864  375707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
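The two cni.go lines above show minikube's CNI auto-selection: with no CNI named in the config (the CNI: field is empty), the kvm2 driver plus the crio runtime resolves to the built-in bridge CNI. A minimal sketch of that kind of decision in Go; chooseCNI and its rule set are illustrative assumptions, not minikube's actual API:

package main

import "fmt"

// chooseCNI mirrors the decision logged above: an explicit CNI always
// wins, and a VM driver combined with the crio runtime falls back to
// the built-in bridge CNI. Hypothetical helper, not minikube's code.
func chooseCNI(userCNI, driver, runtime string) string {
	if userCNI != "" {
		return userCNI // explicit flag always wins
	}
	if driver == "kvm2" && runtime == "crio" {
		return "bridge"
	}
	return "auto"
}

func main() {
	fmt.Println(chooseCNI("", "kvm2", "crio")) // prints: bridge
}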
	I0229 02:51:22.084882  375707 start_flags.go:323] config:
	{Name:newest-cni-052502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:51:22.085102  375707 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 02:51:22.086817  375707 out.go:177] * Starting control plane node newest-cni-052502 in cluster newest-cni-052502
	I0229 02:51:22.088007  375707 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:51:22.088052  375707 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0229 02:51:22.088063  375707 cache.go:56] Caching tarball of preloaded images
	I0229 02:51:22.088161  375707 preload.go:174] Found /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0229 02:51:22.088173  375707 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0229 02:51:22.088322  375707 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/config.json ...
	I0229 02:51:22.088556  375707 start.go:365] acquiring machines lock for newest-cni-052502: {Name:mk054f668739379f9a67a6c82b7639150486ee84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 02:51:22.088638  375707 start.go:369] acquired machines lock for "newest-cni-052502" in 57.643µs
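The machines lock above is specified with a 500ms retry delay and a 13m timeout, and is acquired in under 60µs because nothing else holds it. A minimal sketch of that acquire-with-delay-and-timeout pattern; it uses an in-memory map where minikube actually uses a file-based lock, so the storage is an assumption:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var locks sync.Map // name -> struct{}; stand-in for minikube's lock store

// acquire polls for a named lock, retrying every delay until timeout,
// matching the {Delay:500ms Timeout:13m0s} spec logged above.
func acquire(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, loaded := locks.LoadOrStore(name, struct{}{}); !loaded {
			return nil // lock acquired
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for " + name)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquire("newest-cni-052502", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}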
	I0229 02:51:22.088655  375707 start.go:96] Skipping create...Using existing machine configuration
	I0229 02:51:22.088660  375707 fix.go:54] fixHost starting: 
	I0229 02:51:22.088961  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:51:22.088992  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:51:22.103941  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34847
	I0229 02:51:22.104347  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:51:22.104842  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:51:22.104864  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:51:22.105196  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:51:22.105385  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:51:22.105536  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetState
	I0229 02:51:22.107260  375707 fix.go:102] recreateIfNeeded on newest-cni-052502: state=Stopped err=<nil>
	I0229 02:51:22.107297  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	W0229 02:51:22.107472  375707 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 02:51:22.109226  375707 out.go:177] * Restarting existing kvm2 VM for "newest-cni-052502" ...
	I0229 02:51:22.110368  375707 main.go:141] libmachine: (newest-cni-052502) Calling .Start
	I0229 02:51:22.110558  375707 main.go:141] libmachine: (newest-cni-052502) Ensuring networks are active...
	I0229 02:51:22.111278  375707 main.go:141] libmachine: (newest-cni-052502) Ensuring network default is active
	I0229 02:51:22.111564  375707 main.go:141] libmachine: (newest-cni-052502) Ensuring network mk-newest-cni-052502 is active
	I0229 02:51:22.111927  375707 main.go:141] libmachine: (newest-cni-052502) Getting domain xml...
	I0229 02:51:22.112813  375707 main.go:141] libmachine: (newest-cni-052502) Creating domain...
	I0229 02:51:23.320958  375707 main.go:141] libmachine: (newest-cni-052502) Waiting to get IP...
	I0229 02:51:23.321845  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:23.322370  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:23.322413  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:23.322335  375742 retry.go:31] will retry after 287.429703ms: waiting for machine to come up
	I0229 02:51:23.611862  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:23.612323  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:23.612347  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:23.612272  375742 retry.go:31] will retry after 241.331817ms: waiting for machine to come up
	I0229 02:51:23.855747  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:23.856127  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:23.856155  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:23.856092  375742 retry.go:31] will retry after 352.971318ms: waiting for machine to come up
	I0229 02:51:24.210605  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:24.210984  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:24.211014  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:24.210940  375742 retry.go:31] will retry after 450.622425ms: waiting for machine to come up
	I0229 02:51:24.663549  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:24.664045  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:24.664068  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:24.664017  375742 retry.go:31] will retry after 724.286189ms: waiting for machine to come up
	I0229 02:51:25.390253  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:25.390694  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:25.390721  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:25.390642  375742 retry.go:31] will retry after 751.39326ms: waiting for machine to come up
	I0229 02:51:26.143449  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:26.143896  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:26.143926  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:26.143848  375742 retry.go:31] will retry after 987.016332ms: waiting for machine to come up
	I0229 02:51:27.132589  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:27.133051  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:27.133089  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:27.132981  375742 retry.go:31] will retry after 1.272261217s: waiting for machine to come up
	I0229 02:51:28.406619  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:28.407099  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:28.407133  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:28.407047  375742 retry.go:31] will retry after 1.356261305s: waiting for machine to come up
	I0229 02:51:29.765561  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:29.765927  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:29.765979  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:29.765879  375742 retry.go:31] will retry after 1.543785607s: waiting for machine to come up
	I0229 02:51:31.311449  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:31.311881  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:31.311912  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:31.311813  375742 retry.go:31] will retry after 2.461498809s: waiting for machine to come up
	I0229 02:51:33.774642  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:33.775232  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:33.775258  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:33.775187  375742 retry.go:31] will retry after 2.587371995s: waiting for machine to come up
	I0229 02:51:36.364482  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:36.364932  375707 main.go:141] libmachine: (newest-cni-052502) DBG | unable to find current IP address of domain newest-cni-052502 in network mk-newest-cni-052502
	I0229 02:51:36.364960  375707 main.go:141] libmachine: (newest-cni-052502) DBG | I0229 02:51:36.364884  375742 retry.go:31] will retry after 4.521276729s: waiting for machine to come up
	I0229 02:51:40.890344  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:40.890869  375707 main.go:141] libmachine: (newest-cni-052502) Found IP for machine: 192.168.39.3
	I0229 02:51:40.890893  375707 main.go:141] libmachine: (newest-cni-052502) Reserving static IP address...
	I0229 02:51:40.890907  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has current primary IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:40.891361  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "newest-cni-052502", mac: "52:54:00:19:fc:ef", ip: "192.168.39.3"} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:40.891409  375707 main.go:141] libmachine: (newest-cni-052502) DBG | skip adding static IP to network mk-newest-cni-052502 - found existing host DHCP lease matching {name: "newest-cni-052502", mac: "52:54:00:19:fc:ef", ip: "192.168.39.3"}
	I0229 02:51:40.891422  375707 main.go:141] libmachine: (newest-cni-052502) Reserved static IP address: 192.168.39.3
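The "will retry after ..." lines above show the wait-for-IP loop backing off with growing, jittered intervals, from 287ms up to 4.5s. A sketch of that retry shape; the growth factor and jitter range here are assumptions, not the exact values in minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a jittered, roughly geometric interval between tries, the
// same shape as the intervals logged above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	backoff := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// add up to 50% jitter so concurrent waiters don't sync up
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)+1))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(14, 250*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}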
	I0229 02:51:40.891437  375707 main.go:141] libmachine: (newest-cni-052502) Waiting for SSH to be available...
	I0229 02:51:40.891451  375707 main.go:141] libmachine: (newest-cni-052502) DBG | Getting to WaitForSSH function...
	I0229 02:51:40.893751  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:40.894278  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:40.894322  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:40.894520  375707 main.go:141] libmachine: (newest-cni-052502) DBG | Using SSH client type: external
	I0229 02:51:40.894550  375707 main.go:141] libmachine: (newest-cni-052502) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa (-rw-------)
	I0229 02:51:40.894608  375707 main.go:141] libmachine: (newest-cni-052502) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 02:51:40.894628  375707 main.go:141] libmachine: (newest-cni-052502) DBG | About to run SSH command:
	I0229 02:51:40.894642  375707 main.go:141] libmachine: (newest-cni-052502) DBG | exit 0
	I0229 02:51:41.014663  375707 main.go:141] libmachine: (newest-cni-052502) DBG | SSH cmd err, output: <nil>: 
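WaitForSSH above polls the guest by running `exit 0` through the external ssh client with host-key checking disabled, succeeding once sshd answers. A condensed sketch of that loop; the key path and address come from the log, while the one-second poll interval and two-minute budget are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running `ssh ... exit 0` against the guest until the
// command exits cleanly, as the WaitForSSH lines above do. Assumes an
// ssh binary on PATH.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest sshd is up
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh to %s not ready within %s", addr, timeout)
}

func main() {
	err := waitForSSH("192.168.39.3",
		"/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa",
		2*time.Minute)
	fmt.Println(err)
}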
	I0229 02:51:41.015062  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetConfigRaw
	I0229 02:51:41.015824  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:51:41.018631  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.018976  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:41.019002  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.019216  375707 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/config.json ...
	I0229 02:51:41.019412  375707 machine.go:88] provisioning docker machine ...
	I0229 02:51:41.019430  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:51:41.019631  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:51:41.019807  375707 buildroot.go:166] provisioning hostname "newest-cni-052502"
	I0229 02:51:41.019833  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:51:41.019981  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:51:41.022367  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.022662  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:41.022683  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.022817  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:51:41.022973  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:41.023131  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:41.023249  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:51:41.023438  375707 main.go:141] libmachine: Using SSH client type: native
	I0229 02:51:41.023631  375707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:51:41.023644  375707 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-052502 && echo "newest-cni-052502" | sudo tee /etc/hostname
	I0229 02:51:41.138811  375707 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-052502
	
	I0229 02:51:41.138870  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:51:41.141544  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.141887  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:41.141915  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.142067  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:51:41.142289  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:41.142490  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:41.142629  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:51:41.142818  375707 main.go:141] libmachine: Using SSH client type: native
	I0229 02:51:41.143054  375707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:51:41.143090  375707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-052502' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-052502/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-052502' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 02:51:41.252343  375707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 02:51:41.252384  375707 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-316644/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-316644/.minikube}
	I0229 02:51:41.252433  375707 buildroot.go:174] setting up certificates
	I0229 02:51:41.252454  375707 provision.go:83] configureAuth start
	I0229 02:51:41.252474  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetMachineName
	I0229 02:51:41.252824  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:51:41.255548  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.255931  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:41.255972  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.256144  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:51:41.258176  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.258560  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:41.258594  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.258662  375707 provision.go:138] copyHostCerts
	I0229 02:51:41.258745  375707 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem, removing ...
	I0229 02:51:41.258783  375707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem
	I0229 02:51:41.258863  375707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/ca.pem (1082 bytes)
	I0229 02:51:41.258968  375707 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem, removing ...
	I0229 02:51:41.258980  375707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem
	I0229 02:51:41.259024  375707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/cert.pem (1123 bytes)
	I0229 02:51:41.259106  375707 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem, removing ...
	I0229 02:51:41.259117  375707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem
	I0229 02:51:41.259149  375707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-316644/.minikube/key.pem (1675 bytes)
	I0229 02:51:41.259214  375707 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem org=jenkins.newest-cni-052502 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube newest-cni-052502]
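provision.go above generates a server certificate whose SANs cover the VM IP, localhost, and the machine names. A condensed sketch with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the shared CA key (ca-key.pem), so the signer is an assumption:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SAN list taken from the san=[...] argument logged above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-052502"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-052502"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.3"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}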
	I0229 02:51:41.382345  375707 provision.go:172] copyRemoteCerts
	I0229 02:51:41.382421  375707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 02:51:41.382451  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:51:41.385395  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.385803  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:41.385832  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.385997  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:51:41.386250  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:41.386448  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:51:41.386644  375707 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:51:41.471736  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0229 02:51:41.500474  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 02:51:41.528541  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 02:51:41.556341  375707 provision.go:86] duration metric: configureAuth took 303.850175ms
	I0229 02:51:41.556367  375707 buildroot.go:189] setting minikube options for container-runtime
	I0229 02:51:41.556578  375707 config.go:182] Loaded profile config "newest-cni-052502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:51:41.556664  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:51:41.559586  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.559924  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:41.559961  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.560131  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:51:41.560374  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:41.560567  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:41.560701  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:51:41.560877  375707 main.go:141] libmachine: Using SSH client type: native
	I0229 02:51:41.561054  375707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:51:41.561068  375707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0229 02:51:41.829921  375707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0229 02:51:41.829950  375707 machine.go:91] provisioned docker machine in 810.525319ms
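The `%!s(MISSING)` fragments in the logged shell commands above, and the later `date +%!s(MISSING).%!N(MISSING)` and `"0%!"(MISSING)` lines, are not transcript corruption: they are Go's fmt package flagging a format verb that minikube's logging passed through without an operand (the commands actually sent to the guest used `%s` and `%N`). A two-line demonstration of the artifact:

package main

import "fmt"

func main() {
	// Deliberately malformed: a %s verb with no argument. fmt prints
	// %!s(MISSING) in its place, the same artifact seen in the log.
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s\n")
}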
	I0229 02:51:41.829965  375707 start.go:300] post-start starting for "newest-cni-052502" (driver="kvm2")
	I0229 02:51:41.829983  375707 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 02:51:41.830023  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:51:41.830385  375707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 02:51:41.830418  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:51:41.833361  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.833744  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:41.833795  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.833956  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:51:41.834152  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:41.834329  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:51:41.834513  375707 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:51:41.914879  375707 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 02:51:41.919581  375707 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 02:51:41.919608  375707 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/addons for local assets ...
	I0229 02:51:41.919667  375707 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-316644/.minikube/files for local assets ...
	I0229 02:51:41.919754  375707 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem -> 3238852.pem in /etc/ssl/certs
	I0229 02:51:41.919882  375707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 02:51:41.930983  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:51:41.957877  375707 start.go:303] post-start completed in 127.896052ms
	I0229 02:51:41.957907  375707 fix.go:56] fixHost completed within 19.869246737s
	I0229 02:51:41.957932  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:51:41.960708  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.961086  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:41.961115  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:41.961276  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:51:41.961509  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:41.961652  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:41.961828  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:51:41.962001  375707 main.go:141] libmachine: Using SSH client type: native
	I0229 02:51:41.962156  375707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0229 02:51:41.962165  375707 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 02:51:42.059540  375707 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709175102.028908221
	
	I0229 02:51:42.059573  375707 fix.go:206] guest clock: 1709175102.028908221
	I0229 02:51:42.059583  375707 fix.go:219] Guest: 2024-02-29 02:51:42.028908221 +0000 UTC Remote: 2024-02-29 02:51:41.957911645 +0000 UTC m=+20.020664035 (delta=70.996576ms)
	I0229 02:51:42.059636  375707 fix.go:190] guest clock delta is within tolerance: 70.996576ms
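fix.go above reads the guest's `date +%s.%N` output and accepts the ~71ms guest/host delta as within tolerance. A sketch of that parse-and-compare step; the one-second tolerance is an assumption, the timestamps are the ones logged:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. 1709175102.028908221)
// into a time.Time so the host can compute the guest/host delta.
func parseGuestClock(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	// right-pad the fractional part to nanoseconds
	nanos, err := strconv.ParseInt((frac + "000000000")[:9], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	guest, err := parseGuestClock("1709175102.028908221")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 2, 29, 2, 51, 41, 957911645, time.UTC)
	delta := guest.Sub(remote) // ~70.996576ms, as in the log
	const tolerance = time.Second // assumed threshold
	fmt.Printf("delta=%s within tolerance: %v\n", delta,
		delta < tolerance && delta > -tolerance)
}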
	I0229 02:51:42.059646  375707 start.go:83] releasing machines lock for "newest-cni-052502", held for 19.97099648s
	I0229 02:51:42.059678  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:51:42.059980  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:51:42.062894  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:42.063256  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:42.063283  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:42.063489  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:51:42.063955  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:51:42.064152  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:51:42.064272  375707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 02:51:42.064330  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:51:42.064444  375707 ssh_runner.go:195] Run: cat /version.json
	I0229 02:51:42.064473  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:51:42.067089  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:42.067117  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:42.067463  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:42.067498  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:42.067520  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:42.067536  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:42.067682  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:51:42.067803  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:51:42.067879  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:42.067971  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:51:42.068029  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:51:42.068100  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:51:42.068170  375707 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:51:42.068225  375707 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/newest-cni-052502/id_rsa Username:docker}
	I0229 02:51:42.148357  375707 ssh_runner.go:195] Run: systemctl --version
	I0229 02:51:42.168627  375707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0229 02:51:42.316342  375707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 02:51:42.323361  375707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 02:51:42.323436  375707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 02:51:42.340723  375707 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 02:51:42.340751  375707 start.go:475] detecting cgroup driver to use...
	I0229 02:51:42.340875  375707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 02:51:42.361652  375707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 02:51:42.377768  375707 docker.go:217] disabling cri-docker service (if available) ...
	I0229 02:51:42.377832  375707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0229 02:51:42.393689  375707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0229 02:51:42.409242  375707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0229 02:51:42.547232  375707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0229 02:51:42.706990  375707 docker.go:233] disabling docker service ...
	I0229 02:51:42.707070  375707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0229 02:51:42.724171  375707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0229 02:51:42.739671  375707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0229 02:51:42.878101  375707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0229 02:51:43.021074  375707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 02:51:43.036648  375707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 02:51:43.057329  375707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0229 02:51:43.057401  375707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:51:43.068894  375707 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0229 02:51:43.068971  375707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:51:43.079948  375707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:51:43.091749  375707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0229 02:51:43.102378  375707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
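The sed invocations above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf, then drop and re-add conmon_cgroup. The same whole-line key replacement as a pure-Go sketch; rewriteKey is a hypothetical helper operating on the file contents:

package main

import (
	"fmt"
	"regexp"
)

// rewriteKey replaces any `key = ...` line with `key = "value"`,
// mirroring the sed -i commands logged above.
func rewriteKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	conf := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")
	conf = rewriteKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = rewriteKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(string(conf))
}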
	I0229 02:51:43.113430  375707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 02:51:43.122929  375707 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0229 02:51:43.122986  375707 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0229 02:51:43.143204  375707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
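The lines above show the netfilter fallback: the sysctl probe exits 255 because br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is enabled before crio restarts. A sketch of that sequence; it needs root and assumes modprobe is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter reproduces the fallback logged above: if the
// bridge-nf-call-iptables sysctl file is absent (the status-255 case in
// the log), load br_netfilter, then switch on IPv4 forwarding.
func ensureNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println(err)
	}
}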
	I0229 02:51:43.157586  375707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 02:51:43.290188  375707 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0229 02:51:43.434651  375707 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0229 02:51:43.434743  375707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0229 02:51:43.440860  375707 start.go:543] Will wait 60s for crictl version
	I0229 02:51:43.440924  375707 ssh_runner.go:195] Run: which crictl
	I0229 02:51:43.445378  375707 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 02:51:43.481358  375707 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0229 02:51:43.481440  375707 ssh_runner.go:195] Run: crio --version
	I0229 02:51:43.512343  375707 ssh_runner.go:195] Run: crio --version
	I0229 02:51:43.546313  375707 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0229 02:51:43.547535  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetIP
	I0229 02:51:43.550316  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:43.550716  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:51:43.550760  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:51:43.551012  375707 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 02:51:43.555586  375707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 02:51:43.571389  375707 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 02:51:43.572873  375707 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 02:51:43.572941  375707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:51:43.612687  375707 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0229 02:51:43.612768  375707 ssh_runner.go:195] Run: which lz4
	I0229 02:51:43.617434  375707 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0229 02:51:43.622542  375707 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 02:51:43.622579  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0229 02:51:45.324690  375707 crio.go:444] Took 1.707282 seconds to copy over tarball
	I0229 02:51:45.324801  375707 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 02:51:48.086788  375707 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.761950571s)
	I0229 02:51:48.086830  375707 crio.go:451] Took 2.762109 seconds to extract the tarball
	I0229 02:51:48.086844  375707 ssh_runner.go:146] rm: /preloaded.tar.lz4
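The preload path above copies a ~402MB lz4 tarball to the guest in 1.7s and unpacks it into /var in 2.8s. A sketch of the extraction step using the same tar flags as the log; it assumes GNU tar and lz4 are installed and the caller is root:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload runs the tar pipeline logged above: lz4 decompression
// into /var with security xattrs preserved, reporting the duration.
func extractPreload(tarball string) error {
	start := time.Now()
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	fmt.Printf("extracted %s in %s\n", tarball, time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}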
	I0229 02:51:48.129087  375707 ssh_runner.go:195] Run: sudo crictl images --output json
	I0229 02:51:48.176006  375707 crio.go:496] all images are preloaded for cri-o runtime.
	I0229 02:51:48.176044  375707 cache_images.go:84] Images are preloaded, skipping loading
	I0229 02:51:48.176130  375707 ssh_runner.go:195] Run: crio config
	I0229 02:51:48.228733  375707 cni.go:84] Creating CNI manager for ""
	I0229 02:51:48.228756  375707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:51:48.228780  375707 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 02:51:48.228799  375707 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-052502 NodeName:newest-cni-052502 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 02:51:48.228954  375707 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-052502"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
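The kubeadm config above keeps the pod subnet (10.42.0.0/16, set via the kubeadm.pod-network-cidr extra option) disjoint from the service subnet (10.96.0.0/12), as Kubernetes requires. A quick check of that invariant with Go's net package:

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDRs share any addresses: for valid
// CIDRs this holds exactly when one contains the other's base address.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.42.0.0/16")
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")
	fmt.Println("pod/service CIDRs overlap:", overlaps(pods, svcs)) // false
}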
	
	I0229 02:51:48.229028  375707 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-052502 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 02:51:48.229083  375707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 02:51:48.241059  375707 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 02:51:48.241136  375707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 02:51:48.252260  375707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (417 bytes)
	I0229 02:51:48.271639  375707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 02:51:48.289737  375707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I0229 02:51:48.311301  375707 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0229 02:51:48.316267  375707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
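The bash one-liner above updates /etc/hosts idempotently: drop any line already ending in the tab-separated hostname, then append a fresh entry. The same transformation as a pure-function Go sketch over the file contents:

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry re-implements the logged one-liner: remove lines
// ending in "\t"+host (the grep -v), then append "ip\thost".
func ensureHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1] // trim trailing blank lines
	}
	return strings.Join(kept, "\n") + "\n" + ip + "\t" + host + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(in, "192.168.39.3", "control-plane.minikube.internal"))
}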
	I0229 02:51:48.331343  375707 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502 for IP: 192.168.39.3
	I0229 02:51:48.331385  375707 certs.go:190] acquiring lock for shared ca certs: {Name:mkd6f028899ecf4e22eb84fb292b7189bb17fab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:51:48.331602  375707 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key
	I0229 02:51:48.331646  375707 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key
	I0229 02:51:48.331734  375707 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/client.key
	I0229 02:51:48.331804  375707 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key.599d509e
	I0229 02:51:48.331861  375707 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.key
	I0229 02:51:48.332014  375707 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem (1338 bytes)
	W0229 02:51:48.332064  375707 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885_empty.pem, impossibly tiny 0 bytes
	I0229 02:51:48.332080  375707 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca-key.pem (1679 bytes)
	I0229 02:51:48.332117  375707 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/ca.pem (1082 bytes)
	I0229 02:51:48.332156  375707 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/cert.pem (1123 bytes)
	I0229 02:51:48.332189  375707 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/certs/home/jenkins/minikube-integration/18063-316644/.minikube/certs/key.pem (1675 bytes)
	I0229 02:51:48.332244  375707 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem (1708 bytes)
	I0229 02:51:48.332998  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 02:51:48.361353  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 02:51:48.388416  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 02:51:48.415568  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/newest-cni-052502/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 02:51:48.443231  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 02:51:48.471428  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 02:51:48.499035  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 02:51:48.528314  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 02:51:48.556028  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/ssl/certs/3238852.pem --> /usr/share/ca-certificates/3238852.pem (1708 bytes)
	I0229 02:51:48.582867  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 02:51:48.609136  375707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-316644/.minikube/certs/323885.pem --> /usr/share/ca-certificates/323885.pem (1338 bytes)
	I0229 02:51:48.635399  375707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 02:51:48.655281  375707 ssh_runner.go:195] Run: openssl version
	I0229 02:51:48.663293  375707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3238852.pem && ln -fs /usr/share/ca-certificates/3238852.pem /etc/ssl/certs/3238852.pem"
	I0229 02:51:48.676884  375707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3238852.pem
	I0229 02:51:48.682292  375707 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:21 /usr/share/ca-certificates/3238852.pem
	I0229 02:51:48.682353  375707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3238852.pem
	I0229 02:51:48.689228  375707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3238852.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 02:51:48.702374  375707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 02:51:48.715750  375707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:51:48.721099  375707 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:51:48.721179  375707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 02:51:48.727784  375707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 02:51:48.740329  375707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/323885.pem && ln -fs /usr/share/ca-certificates/323885.pem /etc/ssl/certs/323885.pem"
	I0229 02:51:48.753147  375707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/323885.pem
	I0229 02:51:48.758131  375707 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:21 /usr/share/ca-certificates/323885.pem
	I0229 02:51:48.758194  375707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem
	I0229 02:51:48.764592  375707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/323885.pem /etc/ssl/certs/51391683.0"
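
	(For context: the ls / openssl x509 -hash / ln -fs sequence above installs each CA into OpenSSL's hashed-directory layout. "openssl x509 -hash -noout" prints the certificate's subject hash, and the symlink /etc/ssl/certs/<hash>.0 is the name OpenSSL actually resolves during verification, e.g. 3ec20f2e.0 for 3238852.pem and b5213941.0 for minikubeCA.pem. A minimal Go sketch of the same step, assuming openssl is on PATH; this is an illustrative helper, not minikube's code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash recreates the "ln -fs <pem> /etc/ssl/certs/<subject-hash>.0"
	// step from the log: OpenSSL looks up CA certificates by these hashed names.
	func linkByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // -f semantics: replace an existing link if present
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	)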
	I0229 02:51:48.776855  375707 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 02:51:48.781838  375707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 02:51:48.788413  375707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 02:51:48.795055  375707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 02:51:48.801947  375707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 02:51:48.808314  375707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 02:51:48.814860  375707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
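
	(The six "openssl x509 ... -checkend 86400" probes above ask whether each control-plane certificate stays valid for at least 86400 seconds, i.e. 24 hours; only then are the existing certs reused instead of regenerated. The same test in Go, as a minimal sketch with a hypothetical helper name:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin mirrors "openssl x509 -checkend <seconds>": it reports
	// whether the certificate at path will already be expired d from now.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		fmt.Println("expires within 24h:", expiring)
	}
	)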
	I0229 02:51:48.821329  375707 kubeadm.go:404] StartCluster: {Name:newest-cni-052502 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-052502 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 02:51:48.821464  375707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0229 02:51:48.821523  375707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:51:48.862144  375707 cri.go:89] found id: ""
	I0229 02:51:48.862246  375707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 02:51:48.876370  375707 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 02:51:48.876394  375707 kubeadm.go:636] restartCluster start
	I0229 02:51:48.876445  375707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 02:51:48.889644  375707 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:48.890189  375707 kubeconfig.go:135] verify returned: extract IP: "newest-cni-052502" does not appear in /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:51:48.890433  375707 kubeconfig.go:146] "newest-cni-052502" context is missing from /home/jenkins/minikube-integration/18063-316644/kubeconfig - will repair!
	I0229 02:51:48.890803  375707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:51:48.914805  375707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 02:51:48.928080  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:48.928138  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:48.942722  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:49.428304  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:49.428391  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:49.443566  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:49.929084  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:49.929189  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:49.943384  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:50.428984  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:50.429073  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:50.443322  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:50.928894  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:50.929013  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:50.944123  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:51.428620  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:51.428719  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:51.443373  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:51.928902  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:51.928999  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:51.942899  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:52.428623  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:52.428714  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:52.442191  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:52.928817  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:52.928917  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:52.942732  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:53.428280  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:53.428382  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:53.441860  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:53.928391  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:53.928510  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:53.941801  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:54.428348  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:54.428440  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:54.442106  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:54.928761  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:54.928863  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:54.943118  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:55.428710  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:55.428791  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:55.442452  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:55.929055  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:55.929151  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:55.942392  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:56.429002  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:56.429082  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:56.442539  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:56.929149  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:56.929289  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:56.942869  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:57.428899  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:57.428980  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:57.443118  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:57.928663  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:57.928764  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:57.942254  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:58.428822  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:58.428920  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:58.442341  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 02:51:58.928740  375707 api_server.go:166] Checking apiserver status ...
	I0229 02:51:58.928838  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 02:51:58.943458  375707 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
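
	(The block above is minikube's restart probe: roughly every 500 ms it runs pgrep for a kube-apiserver process, and when the surrounding context times out it gives up, which is the "context deadline exceeded" conclusion on the next line. A stripped-down sketch of that poll-until-deadline pattern, illustrative only; the real loop lives in minikube's api_server.go:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls for a kube-apiserver process until ctx expires,
	// using the same pgrep probe the log shows.
	func waitForAPIServer(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver error: %w", ctx.Err())
			case <-ticker.C:
				if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
					return nil // process found
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := waitForAPIServer(ctx); err != nil {
			fmt.Println(err) // e.g. "apiserver error: context deadline exceeded"
		}
	}
	)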
	I0229 02:51:58.943489  375707 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 02:51:58.943500  375707 kubeadm.go:1135] stopping kube-system containers ...
	I0229 02:51:58.943512  375707 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0229 02:51:58.943585  375707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0229 02:51:58.982811  375707 cri.go:89] found id: ""
	I0229 02:51:58.982884  375707 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 02:51:59.005036  375707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 02:51:59.017876  375707 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 02:51:59.017934  375707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 02:51:59.029407  375707 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 02:51:59.029436  375707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:51:59.166101  375707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:52:00.205427  375707 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.03928421s)
	I0229 02:52:00.205459  375707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:52:00.427576  375707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:52:00.500289  375707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
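
	(Because existing configuration files were found, restartCluster re-runs only the individual kubeadm init phases shown above — certs, kubeconfig, kubelet-start, control-plane, etcd — rather than a full "kubeadm init". A sketch of that sequencing; illustrative, with the binary path taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The same five phases the log runs, in order.
		phases := []string{
			"certs all",
			"kubeconfig all",
			"kubelet-start",
			"control-plane all",
			"etcd local",
		}
		for _, p := range phases {
			cmd := fmt.Sprintf(
				`sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", p, err, out)
				return
			}
		}
	}
	)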
	I0229 02:52:00.587307  375707 api_server.go:52] waiting for apiserver process to appear ...
	I0229 02:52:00.587405  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:52:01.088425  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:52:01.588275  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:52:02.087497  375707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 02:52:02.102683  375707 api_server.go:72] duration metric: took 1.515370863s to wait for apiserver process to appear ...
	I0229 02:52:02.102716  375707 api_server.go:88] waiting for apiserver healthz status ...
	I0229 02:52:02.102738  375707 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0229 02:52:04.337807  375707 api_server.go:279] https://192.168.39.3:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:52:04.337839  375707 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:52:04.337857  375707 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0229 02:52:04.392099  375707 api_server.go:279] https://192.168.39.3:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 02:52:04.392131  375707 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 02:52:04.603425  375707 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0229 02:52:04.607966  375707 api_server.go:279] https://192.168.39.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:52:04.608011  375707 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:52:05.103366  375707 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0229 02:52:05.108760  375707 api_server.go:279] https://192.168.39.3:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 02:52:05.108799  375707 api_server.go:103] status: https://192.168.39.3:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 02:52:05.603409  375707 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8443/healthz ...
	I0229 02:52:05.608351  375707 api_server.go:279] https://192.168.39.3:8443/healthz returned 200:
	ok
	I0229 02:52:05.616577  375707 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 02:52:05.616614  375707 api_server.go:131] duration metric: took 3.51388911s to wait for apiserver health ...
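
	(The healthz progression above is typical for an apiserver restart: first 403, because the unauthenticated probe runs as system:anonymous before the RBAC bootstrap roles exist; then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending; then 200 "ok". A minimal Go sketch of such a probe loop — the InsecureSkipVerify is an assumption for brevity, a real client would trust the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for the sketch: skip server-cert verification rather
			// than loading the cluster CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.39.3:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	)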
	I0229 02:52:05.616627  375707 cni.go:84] Creating CNI manager for ""
	I0229 02:52:05.616636  375707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 02:52:05.618474  375707 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 02:52:05.619817  375707 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 02:52:05.636700  375707 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 02:52:05.669183  375707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 02:52:05.700200  375707 system_pods.go:59] 8 kube-system pods found
	I0229 02:52:05.700236  375707 system_pods.go:61] "coredns-76f75df574-2sc4k" [8690e074-b8f7-458c-aec8-d30cef9ef415] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 02:52:05.700243  375707 system_pods.go:61] "etcd-newest-cni-052502" [86a031a7-f973-472d-97c5-bc63e4a134a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 02:52:05.700251  375707 system_pods.go:61] "kube-apiserver-newest-cni-052502" [0df0f95e-f985-45fe-b729-5d78c412e0ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 02:52:05.700257  375707 system_pods.go:61] "kube-controller-manager-newest-cni-052502" [67804aac-1c16-470e-a190-5d7eaabf44d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 02:52:05.700264  375707 system_pods.go:61] "kube-proxy-xgxzs" [39bb915e-fb2d-4761-847b-d3c6ad1e3872] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 02:52:05.700269  375707 system_pods.go:61] "kube-scheduler-newest-cni-052502" [5e107cd4-709c-4acf-bad7-3537428cabef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 02:52:05.700276  375707 system_pods.go:61] "metrics-server-57f55c9bc5-fmxrh" [e33e123b-8d79-4a17-b946-c028d322593c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 02:52:05.700282  375707 system_pods.go:61] "storage-provisioner" [01034756-460d-4664-b370-5668599cea9d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 02:52:05.700292  375707 system_pods.go:74] duration metric: took 31.088556ms to wait for pod list to return data ...
	I0229 02:52:05.700301  375707 node_conditions.go:102] verifying NodePressure condition ...
	I0229 02:52:05.708484  375707 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 02:52:05.708509  375707 node_conditions.go:123] node cpu capacity is 2
	I0229 02:52:05.708520  375707 node_conditions.go:105] duration metric: took 8.21171ms to run NodePressure ...
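
	(The NodePressure step reads each node's reported capacity — here 2 CPUs and 17734596Ki of ephemeral storage — from the API before continuing. An equivalent query via client-go, as a sketch; the kubeconfig path is a placeholder, not the test's actual profile path:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; the run above uses its own profile kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}
	)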
	I0229 02:52:05.708551  375707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 02:52:06.015052  375707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 02:52:06.028428  375707 ops.go:34] apiserver oom_adj: -16
	I0229 02:52:06.028454  375707 kubeadm.go:640] restartCluster took 17.152052013s
	I0229 02:52:06.028467  375707 kubeadm.go:406] StartCluster complete in 17.207149211s
	I0229 02:52:06.028491  375707 settings.go:142] acquiring lock: {Name:mk8c3ec6a39254df23a940a266c4c301c1c72782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:52:06.028580  375707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:52:06.029419  375707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/kubeconfig: {Name:mk1d1178fb4dd482c54bf7fe1f8b3f04815a1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 02:52:06.029650  375707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 02:52:06.029818  375707 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 02:52:06.029930  375707 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-052502"
	I0229 02:52:06.029950  375707 addons.go:69] Setting default-storageclass=true in profile "newest-cni-052502"
	I0229 02:52:06.029956  375707 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-052502"
	I0229 02:52:06.029957  375707 config.go:182] Loaded profile config "newest-cni-052502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0229 02:52:06.029969  375707 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-052502"
	I0229 02:52:06.029978  375707 addons.go:69] Setting dashboard=true in profile "newest-cni-052502"
	I0229 02:52:06.029991  375707 addons.go:234] Setting addon dashboard=true in "newest-cni-052502"
	W0229 02:52:06.029965  375707 addons.go:243] addon storage-provisioner should already be in state true
	W0229 02:52:06.029998  375707 addons.go:243] addon dashboard should already be in state true
	I0229 02:52:06.030002  375707 addons.go:69] Setting metrics-server=true in profile "newest-cni-052502"
	I0229 02:52:06.030039  375707 addons.go:234] Setting addon metrics-server=true in "newest-cni-052502"
	W0229 02:52:06.030048  375707 addons.go:243] addon metrics-server should already be in state true
	I0229 02:52:06.030050  375707 host.go:66] Checking if "newest-cni-052502" exists ...
	I0229 02:52:06.030059  375707 host.go:66] Checking if "newest-cni-052502" exists ...
	I0229 02:52:06.030081  375707 host.go:66] Checking if "newest-cni-052502" exists ...
	I0229 02:52:06.030398  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:52:06.030445  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:52:06.030482  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:52:06.030487  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:52:06.030501  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:52:06.030523  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:52:06.030532  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:52:06.030591  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:52:06.034423  375707 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-052502" context rescaled to 1 replicas
	I0229 02:52:06.034458  375707 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.3 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0229 02:52:06.037008  375707 out.go:177] * Verifying Kubernetes components...
	I0229 02:52:06.038342  375707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 02:52:06.047705  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0229 02:52:06.047885  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44409
	I0229 02:52:06.048295  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:52:06.048418  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:52:06.048915  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:52:06.048938  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:52:06.049310  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:52:06.049809  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0229 02:52:06.049840  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39869
	I0229 02:52:06.049972  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:52:06.050020  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:52:06.050250  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:52:06.050250  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:52:06.050328  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:52:06.050498  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:52:06.050680  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:52:06.050700  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:52:06.050753  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:52:06.051309  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:52:06.051343  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:52:06.051548  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:52:06.051601  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:52:06.051618  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:52:06.051853  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetState
	I0229 02:52:06.051963  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:52:06.052562  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:52:06.052621  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:52:06.055054  375707 addons.go:234] Setting addon default-storageclass=true in "newest-cni-052502"
	W0229 02:52:06.055074  375707 addons.go:243] addon default-storageclass should already be in state true
	I0229 02:52:06.055101  375707 host.go:66] Checking if "newest-cni-052502" exists ...
	I0229 02:52:06.055513  375707 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 02:52:06.055558  375707 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 02:52:06.071003  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0229 02:52:06.071155  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36513
	I0229 02:52:06.071520  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:52:06.071637  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:52:06.072021  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:52:06.072041  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:52:06.072177  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:52:06.072193  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:52:06.072429  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:52:06.072525  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:52:06.072728  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetState
	I0229 02:52:06.072784  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetState
	I0229 02:52:06.074569  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45203
	I0229 02:52:06.074744  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:52:06.074917  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:52:06.075006  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:52:06.077014  375707 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 02:52:06.075484  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:52:06.079942  375707 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 02:52:06.079961  375707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 02:52:06.079980  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:52:06.078434  375707 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 02:52:06.078471  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:52:06.081364  375707 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 02:52:06.081387  375707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 02:52:06.081405  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHHostname
	I0229 02:52:06.082267  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:52:06.082501  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetState
	I0229 02:52:06.083340  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:52:06.083688  375707 main.go:141] libmachine: (newest-cni-052502) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fc:ef", ip: ""} in network mk-newest-cni-052502: {Iface:virbr2 ExpiryTime:2024-02-29 03:51:34 +0000 UTC Type:0 Mac:52:54:00:19:fc:ef Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:newest-cni-052502 Clientid:01:52:54:00:19:fc:ef}
	I0229 02:52:06.083722  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined IP address 192.168.39.3 and MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:52:06.083863  375707 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32971
	I0229 02:52:06.083933  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:52:06.084596  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHKeyPath
	I0229 02:52:06.084756  375707 main.go:141] libmachine: () Calling .GetVersion
	I0229 02:52:06.085247  375707 main.go:141] libmachine: (newest-cni-052502) Calling .DriverName
	I0229 02:52:06.085411  375707 main.go:141] libmachine: Using API Version  1
	I0229 02:52:06.085445  375707 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 02:52:06.086952  375707 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 02:52:06.085791  375707 main.go:141] libmachine: () Calling .GetMachineName
	I0229 02:52:06.086151  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHUsername
	I0229 02:52:06.087200  375707 main.go:141] libmachine: (newest-cni-052502) DBG | domain newest-cni-052502 has defined MAC address 52:54:00:19:fc:ef in network mk-newest-cni-052502
	I0229 02:52:06.087640  375707 main.go:141] libmachine: (newest-cni-052502) Calling .GetSSHPort
	I0229 02:52:06.089881  375707 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.180603035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0a66d1b-b468-41d5-910b-a744db6eb4a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.180796564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02,PodSandboxId:0c48a66d310655ab2f44cf0fba1ed5662cd89fa93594cb4a45127f109c5609bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709174181818261973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b70f8e-1689-4526-a39f-eb8005cbecd2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee800f2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694,PodSandboxId:48350020b0e2cc4ab209e343d9e15a1d5fdd06f201a07de267e4321a1bd3f5e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709174181924034288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj4sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2741c05-81b2-4de6-8329-f88912d48160,},Annotations:map[string]string{io.kubernetes.container.hash: 9e732771,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f,PodSandboxId:1a33a191dbe670137e358519d3834e0805f639b17a9a0eca4260511d90a80c2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709174180078109598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gr44w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a74b553f-683a-4e1b-ac48-b4553d00b306,},Annotations:map[string]string{io.kubernetes.container.hash: ec9d29f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861,PodSandboxId:f6414d4bee4631d262ba32af82ea34f65134b75fe5f17d498b5119a6ef282f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709174159921499699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39250f8b415d6b029a5f20f6b03dea1,},Annotations:map[string]string{io.kubernetes.container.hash: 716a6c18,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2,PodSandboxId:b53822c7895d82ab99052b40e726b36e52b2b7ec65f4ca2884055d4f5c2eec67,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709174159899484093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79083cc52dd4c23bb4518dc44bebac51,},Annotations:map[string]string{io.kubernetes.container.hash: 92573f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9,PodSandboxId:40615d7a1d3d9dc3f0603d3d2355c82e26433a92959d54f111a82e2049cdabd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709174159830650625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bbfe260589851d71a917f7ab33efd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349,PodSandboxId:330b39a8726b0e3e8f2afacbb2e6d86b892fdb221c43c3052ee63edee8cd8125,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709174159820853035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b278231d08a8a1a33579d6513f231fd,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0a66d1b-b468-41d5-910b-a744db6eb4a2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.187056261Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=17e67bcb-589e-449a-9bb2-debbee36b530 name=/runtime.v1.RuntimeService/Status
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.187405295Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=17e67bcb-589e-449a-9bb2-debbee36b530 name=/runtime.v1.RuntimeService/Status
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.228374935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=297f9504-6ee3-4cff-8567-a219b9eb27ed name=/runtime.v1.RuntimeService/Version
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.228480431Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=297f9504-6ee3-4cff-8567-a219b9eb27ed name=/runtime.v1.RuntimeService/Version
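
	(These crio[674] entries are the CRI side of the polling: clients such as kubelet, crictl, and minikube call the RuntimeService gRPC API over the unix socket /var/run/crio/crio.sock, and CRI-O logs each request/response pair. A minimal sketch of the Version call above using the CRI client API, assuming the k8s.io/cri-api and google.golang.org/grpc modules are available:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		// CRI-O listens on a local unix socket; no TLS is involved.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		// Matches the logged response: RuntimeName:cri-o, RuntimeVersion:1.29.1
		fmt.Printf("%s %s (API %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
	}
	)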
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.230136555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d09a238-8c8c-4356-b5b9-76ac1c4a77d8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.231365064Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175127231338881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d09a238-8c8c-4356-b5b9-76ac1c4a77d8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.232225439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=714f8f42-a3a3-47e1-be69-7a6a949fe1fb name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.232306752Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=714f8f42-a3a3-47e1-be69-7a6a949fe1fb name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.232480492Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02,PodSandboxId:0c48a66d310655ab2f44cf0fba1ed5662cd89fa93594cb4a45127f109c5609bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709174181818261973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b70f8e-1689-4526-a39f-eb8005cbecd2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee800f2,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694,PodSandboxId:48350020b0e2cc4ab209e343d9e15a1d5fdd06f201a07de267e4321a1bd3f5e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709174181924034288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj4sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2741c05-81b2-4de6-8329-f88912d48160,},Annotations:map[string]string{io.kubernetes.container.hash: 9e732771,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f,PodSandboxId:1a33a191dbe670137e358519d3834e0805f639b17a9a0eca4260511d90a80c2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709174180078109598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gr44w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: a74b553f-683a-4e1b-ac48-b4553d00b306,},Annotations:map[string]string{io.kubernetes.container.hash: ec9d29f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861,PodSandboxId:f6414d4bee4631d262ba32af82ea34f65134b75fe5f17d498b5119a6ef282f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709174159921499699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39250f8b415d6b029a5f
20f6b03dea1,},Annotations:map[string]string{io.kubernetes.container.hash: 716a6c18,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2,PodSandboxId:b53822c7895d82ab99052b40e726b36e52b2b7ec65f4ca2884055d4f5c2eec67,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709174159899484093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79083cc52dd4c23bb4518dc
44bebac51,},Annotations:map[string]string{io.kubernetes.container.hash: 92573f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9,PodSandboxId:40615d7a1d3d9dc3f0603d3d2355c82e26433a92959d54f111a82e2049cdabd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709174159830650625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4bbfe260589851d71a917f7ab33efd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349,PodSandboxId:330b39a8726b0e3e8f2afacbb2e6d86b892fdb221c43c3052ee63edee8cd8125,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709174159820853035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9b278231d08a8a1a33579d6513f231fd,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=714f8f42-a3a3-47e1-be69-7a6a949fe1fb name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.275655530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f818410a-5fa0-401f-9cbc-d604bb484f7b name=/runtime.v1.RuntimeService/Version
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.275753615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f818410a-5fa0-401f-9cbc-d604bb484f7b name=/runtime.v1.RuntimeService/Version
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.277508058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f54f5b0-a4fb-4c3b-9269-c660bb51d02f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.278167292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175127278139852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f54f5b0-a4fb-4c3b-9269-c660bb51d02f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.278916083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca9ffcc3-8ba5-4d95-a48b-be392fb5803e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.278995143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca9ffcc3-8ba5-4d95-a48b-be392fb5803e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.279270658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02,PodSandboxId:0c48a66d310655ab2f44cf0fba1ed5662cd89fa93594cb4a45127f109c5609bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709174181818261973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b70f8e-1689-4526-a39f-eb8005cbecd2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee800f2,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694,PodSandboxId:48350020b0e2cc4ab209e343d9e15a1d5fdd06f201a07de267e4321a1bd3f5e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709174181924034288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj4sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2741c05-81b2-4de6-8329-f88912d48160,},Annotations:map[string]string{io.kubernetes.container.hash: 9e732771,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f,PodSandboxId:1a33a191dbe670137e358519d3834e0805f639b17a9a0eca4260511d90a80c2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709174180078109598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gr44w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: a74b553f-683a-4e1b-ac48-b4553d00b306,},Annotations:map[string]string{io.kubernetes.container.hash: ec9d29f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861,PodSandboxId:f6414d4bee4631d262ba32af82ea34f65134b75fe5f17d498b5119a6ef282f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709174159921499699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39250f8b415d6b029a5f
20f6b03dea1,},Annotations:map[string]string{io.kubernetes.container.hash: 716a6c18,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2,PodSandboxId:b53822c7895d82ab99052b40e726b36e52b2b7ec65f4ca2884055d4f5c2eec67,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709174159899484093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79083cc52dd4c23bb4518dc
44bebac51,},Annotations:map[string]string{io.kubernetes.container.hash: 92573f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9,PodSandboxId:40615d7a1d3d9dc3f0603d3d2355c82e26433a92959d54f111a82e2049cdabd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709174159830650625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4bbfe260589851d71a917f7ab33efd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349,PodSandboxId:330b39a8726b0e3e8f2afacbb2e6d86b892fdb221c43c3052ee63edee8cd8125,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709174159820853035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9b278231d08a8a1a33579d6513f231fd,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca9ffcc3-8ba5-4d95-a48b-be392fb5803e name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.318948652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b4525a7-31cf-4e2b-8e39-50deb7779275 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.319048639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b4525a7-31cf-4e2b-8e39-50deb7779275 name=/runtime.v1.RuntimeService/Version
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.320507594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7711864f-9aae-4e3e-a54d-1217afa876ac name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.321039692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1709175127321018260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7711864f-9aae-4e3e-a54d-1217afa876ac name=/runtime.v1.ImageService/ImageFsInfo
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.322168939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b81287b6-d795-4552-ad3a-efd8b99f5cf2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.322289968Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b81287b6-d795-4552-ad3a-efd8b99f5cf2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 29 02:52:07 default-k8s-diff-port-071485 crio[674]: time="2024-02-29 02:52:07.322463343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02,PodSandboxId:0c48a66d310655ab2f44cf0fba1ed5662cd89fa93594cb4a45127f109c5609bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1709174181818261973,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7b70f8e-1689-4526-a39f-eb8005cbecd2,},Annotations:map[string]string{io.kubernetes.container.hash: 8ee800f2,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694,PodSandboxId:48350020b0e2cc4ab209e343d9e15a1d5fdd06f201a07de267e4321a1bd3f5e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1709174181924034288,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xj4sh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2741c05-81b2-4de6-8329-f88912d48160,},Annotations:map[string]string{io.kubernetes.container.hash: 9e732771,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f,PodSandboxId:1a33a191dbe670137e358519d3834e0805f639b17a9a0eca4260511d90a80c2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1709174180078109598,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gr44w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: a74b553f-683a-4e1b-ac48-b4553d00b306,},Annotations:map[string]string{io.kubernetes.container.hash: ec9d29f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861,PodSandboxId:f6414d4bee4631d262ba32af82ea34f65134b75fe5f17d498b5119a6ef282f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1709174159921499699,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39250f8b415d6b029a5f
20f6b03dea1,},Annotations:map[string]string{io.kubernetes.container.hash: 716a6c18,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2,PodSandboxId:b53822c7895d82ab99052b40e726b36e52b2b7ec65f4ca2884055d4f5c2eec67,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1709174159899484093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79083cc52dd4c23bb4518dc
44bebac51,},Annotations:map[string]string{io.kubernetes.container.hash: 92573f54,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9,PodSandboxId:40615d7a1d3d9dc3f0603d3d2355c82e26433a92959d54f111a82e2049cdabd6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1709174159830650625,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4bbfe260589851d71a917f7ab33efd9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349,PodSandboxId:330b39a8726b0e3e8f2afacbb2e6d86b892fdb221c43c3052ee63edee8cd8125,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1709174159820853035,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-071485,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9b278231d08a8a1a33579d6513f231fd,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b81287b6-d795-4552-ad3a-efd8b99f5cf2 name=/runtime.v1.RuntimeService/ListContainers
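
The repeated Version, ImageFsInfo, and ListContainers request/response pairs above are most likely the kubelet's periodic CRI polling of CRI-O over its gRPC socket; each poll returns the same seven running containers. The same three endpoints can be exercised by hand with crictl (a minimal sketch, assuming crictl is present on the node and CRI-O listens on its default socket):

  $ sudo crictl -r unix:///var/run/crio/crio.sock version      # RuntimeService/Version
  $ sudo crictl -r unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers
  $ sudo crictl -r unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo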
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	450ceac543af8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   48350020b0e2c       coredns-5dd5756b68-xj4sh
	01b4801ac4a5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   0c48a66d31065       storage-provisioner
	44fe677f15041       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   1a33a191dbe67       kube-proxy-gr44w
	da1b959c6cfcf       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   f6414d4bee463       etcd-default-k8s-diff-port-071485
	f33d63f6603f7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   b53822c7895d8       kube-apiserver-default-k8s-diff-port-071485
	817abd6ec8c85       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   40615d7a1d3d9       kube-controller-manager-default-k8s-diff-port-071485
	15b0755a43227       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   330b39a8726b0       kube-scheduler-default-k8s-diff-port-071485
	
	
	==> coredns [450ceac543af80e2442c3c6b77d2b271d6ad3a0d2690633e988046f911f71694] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54411 - 24465 "HINFO IN 7655657684021901365.2426359110297695895. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012429203s
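
The lone NXDOMAIN entry above is CoreDNS's startup self-check (a random HINFO query it sends to itself), not a failure. A quick way to confirm cluster DNS end to end is to tail the CoreDNS logs and run a lookup from a throwaway pod (a sketch; the busybox image choice is an assumption, any image with nslookup works):

  $ kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
  $ kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
      nslookup kubernetes.default.svc.cluster.local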
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-071485
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-071485
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61
	                    minikube.k8s.io/name=default-k8s-diff-port-071485
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T02_36_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 02:36:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-071485
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 02:52:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 02:51:45 +0000   Thu, 29 Feb 2024 02:36:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 02:51:45 +0000   Thu, 29 Feb 2024 02:36:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 02:51:45 +0000   Thu, 29 Feb 2024 02:36:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 02:51:45 +0000   Thu, 29 Feb 2024 02:36:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.233
	  Hostname:    default-k8s-diff-port-071485
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 caf3cd82fa1947558241624e74122209
	  System UUID:                caf3cd82-fa19-4755-8241-624e74122209
	  Boot ID:                    cd093dea-45bb-4a34-bcff-e5ce0ba51ed6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-xj4sh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-071485                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-071485             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-071485    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-gr44w                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-071485             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-fpwzl                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-071485 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-071485 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-071485 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m   node-controller  Node default-k8s-diff-port-071485 event: Registered Node default-k8s-diff-port-071485 in Controller
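
Everything in this describe output points at a healthy control-plane node: no taints, Ready=True since 02:36:03, and all pressure conditions False. When only the conditions matter, a jsonpath query is a more compact check than the full describe (a sketch against this node name):

  $ kubectl get node default-k8s-diff-port-071485 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'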
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053900] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044548] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.614521] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.472404] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.776477] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.106254] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.061046] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075778] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.207667] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.144426] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.273225] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[Feb29 02:31] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.062327] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.681483] kauditd_printk_skb: 72 callbacks suppressed
	[  +8.113446] kauditd_printk_skb: 69 callbacks suppressed
	[ +22.882961] kauditd_printk_skb: 1 callbacks suppressed
	[Feb29 02:35] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.740882] systemd-fstab-generator[3393]: Ignoring "noauto" option for root device
	[Feb29 02:36] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.165590] systemd-fstab-generator[3718]: Ignoring "noauto" option for root device
	[ +13.611093] kauditd_printk_skb: 14 callbacks suppressed
	[Feb29 02:37] kauditd_printk_skb: 45 callbacks suppressed
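
The dmesg excerpt is routine VM boot noise (systemd-fstab-generator messages and kauditd throttling); nothing here suggests a kernel-level problem. The same buffer can be pulled from the live node through the profile's ssh helper (a sketch using this profile name):

  $ out/minikube-linux-amd64 -p default-k8s-diff-port-071485 ssh "dmesg | tail -n 30"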
	
	
	==> etcd [da1b959c6cfcf43a2a839c3ab19ca7057a86ee604113ba229d9c1685cc1e4861] <==
	{"level":"info","ts":"2024-02-29T02:36:00.5372Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.233:2379"}
	{"level":"info","ts":"2024-02-29T02:36:00.541011Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb00245d0a15f92c","local-member-id":"4caceb90632e0222","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:36:00.542293Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:36:00.54562Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T02:36:00.543643Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T02:36:00.545693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T02:36:00.542169Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T02:46:01.416105Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":668}
	{"level":"info","ts":"2024-02-29T02:46:01.418437Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":668,"took":"1.981671ms","hash":1578095446}
	{"level":"info","ts":"2024-02-29T02:46:01.418488Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1578095446,"revision":668,"compact-revision":-1}
	{"level":"info","ts":"2024-02-29T02:50:27.173179Z","caller":"traceutil/trace.go:171","msg":"trace[1119556615] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"149.067224ms","start":"2024-02-29T02:50:27.024065Z","end":"2024-02-29T02:50:27.173133Z","steps":["trace[1119556615] 'process raft request'  (duration: 148.967339ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T02:50:43.403366Z","caller":"traceutil/trace.go:171","msg":"trace[2110839826] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"124.824518ms","start":"2024-02-29T02:50:43.278521Z","end":"2024-02-29T02:50:43.403345Z","steps":["trace[2110839826] 'process raft request'  (duration: 124.67132ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T02:50:44.84346Z","caller":"traceutil/trace.go:171","msg":"trace[453582263] transaction","detail":"{read_only:false; response_revision:1141; number_of_response:1; }","duration":"198.060612ms","start":"2024-02-29T02:50:44.645384Z","end":"2024-02-29T02:50:44.843445Z","steps":["trace[453582263] 'process raft request'  (duration: 197.964144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T02:50:45.089915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.20144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.233\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-02-29T02:50:45.090105Z","caller":"traceutil/trace.go:171","msg":"trace[20335622] range","detail":"{range_begin:/registry/masterleases/192.168.61.233; range_end:; response_count:1; response_revision:1141; }","duration":"127.476718ms","start":"2024-02-29T02:50:44.962605Z","end":"2024-02-29T02:50:45.090082Z","steps":["trace[20335622] 'range keys from in-memory index tree'  (duration: 127.081165ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T02:50:45.089911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.817567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T02:50:45.091083Z","caller":"traceutil/trace.go:171","msg":"trace[1166176505] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1141; }","duration":"114.113463ms","start":"2024-02-29T02:50:44.976955Z","end":"2024-02-29T02:50:45.091068Z","steps":["trace[1166176505] 'range keys from in-memory index tree'  (duration: 112.732171ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T02:51:01.427608Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":911}
	{"level":"info","ts":"2024-02-29T02:51:01.429304Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":911,"took":"1.366801ms","hash":1330244355}
	{"level":"info","ts":"2024-02-29T02:51:01.429334Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1330244355,"revision":911,"compact-revision":668}
	{"level":"warn","ts":"2024-02-29T02:51:50.134384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.976043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T02:51:50.134579Z","caller":"traceutil/trace.go:171","msg":"trace[479212839] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1196; }","duration":"160.125881ms","start":"2024-02-29T02:51:49.974368Z","end":"2024-02-29T02:51:50.134494Z","steps":["trace[479212839] 'range keys from in-memory index tree'  (duration: 159.90201ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T02:51:50.383938Z","caller":"traceutil/trace.go:171","msg":"trace[1965308315] linearizableReadLoop","detail":"{readStateIndex:1401; appliedIndex:1400; }","duration":"159.817306ms","start":"2024-02-29T02:51:50.224104Z","end":"2024-02-29T02:51:50.383921Z","steps":["trace[1965308315] 'read index received'  (duration: 159.59268ms)","trace[1965308315] 'applied index is now lower than readState.Index'  (duration: 223.621µs)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T02:51:50.384095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.976534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-02-29T02:51:50.384173Z","caller":"traceutil/trace.go:171","msg":"trace[2127842905] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1196; }","duration":"160.08132ms","start":"2024-02-29T02:51:50.224079Z","end":"2024-02-29T02:51:50.384161Z","steps":["trace[2127842905] 'agreement among raft nodes before linearized reading'  (duration: 159.933992ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:52:07 up 21 min,  0 users,  load average: 0.07, 0.09, 0.15
	Linux default-k8s-diff-port-071485 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f33d63f6603f71b72180bd008e128fdb1db7592a39749024b0d2d249504aa7c2] <==
	I0229 02:49:04.195928       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:49:04.197153       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:49:04.197214       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:49:04.197222       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 02:50:03.077355       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 02:51:03.077737       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:51:03.199704       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:51:03.199829       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:51:03.200382       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:51:04.200334       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:51:04.200400       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:51:04.200409       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:51:04.200505       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:51:04.200682       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:51:04.201929       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0229 02:52:03.077230       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0229 02:52:04.200877       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:52:04.200949       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0229 02:52:04.200957       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0229 02:52:04.203159       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 02:52:04.203319       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0229 02:52:04.203486       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
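
The repeating pattern here is the aggregation layer failing to reach metrics-server: the v1beta1.metrics.k8s.io APIService keeps answering 503, so the apiserver requeues its OpenAPI fetch on every cycle. That matches the metrics-server pod shown in the node listing as requesting resources but never serving. The usual triage (a sketch; k8s-app=metrics-server is the addon's standard label):

  $ kubectl get apiservice v1beta1.metrics.k8s.io
  $ kubectl -n kube-system get pods -l k8s-app=metrics-server
  $ kubectl -n kube-system logs -l k8s-app=metrics-server --tail=50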
	
	
	==> kube-controller-manager [817abd6ec8c852a40653b2c3172660cc2d7ad53df5852e11ef50a72f83cb9ac9] <==
	I0229 02:46:19.507092       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:46:48.986614       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:46:49.518971       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:47:18.994503       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:47:19.528679       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0229 02:47:23.821952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="406.264µs"
	I0229 02:47:36.819799       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.011358ms"
	E0229 02:47:49.001136       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:47:49.538064       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:48:19.008023       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:48:19.546874       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:48:49.013846       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:48:49.556734       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:49:19.022314       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:49:19.566271       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:49:49.029901       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:49:49.575877       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:50:19.038101       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:50:19.586225       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:50:49.044856       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:50:49.595190       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:51:19.049941       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:51:19.605092       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0229 02:51:49.055897       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0229 02:51:49.613807       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
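
These controller-manager errors are a downstream symptom of the same unavailable metrics-server: the resource-quota controller and the garbage collector both trip over the stale metrics.k8s.io/v1beta1 group on every 30 s discovery resync. The discovery state can be probed directly (a sketch):

  $ kubectl api-resources --api-group=metrics.k8s.io
  $ kubectl get --raw /apis/metrics.k8s.io/v1beta1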
	
	
	==> kube-proxy [44fe677f15041d746ac6d61dfa28fac59c9dcd220b8b14060779fda6fe08f12f] <==
	I0229 02:36:20.717822       1 server_others.go:69] "Using iptables proxy"
	I0229 02:36:20.764509       1 node.go:141] Successfully retrieved node IP: 192.168.61.233
	I0229 02:36:20.863431       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 02:36:20.863506       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 02:36:20.869927       1 server_others.go:152] "Using iptables Proxier"
	I0229 02:36:20.871001       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 02:36:20.871351       1 server.go:846] "Version info" version="v1.28.4"
	I0229 02:36:20.871399       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 02:36:20.874077       1 config.go:188] "Starting service config controller"
	I0229 02:36:20.877917       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 02:36:20.877988       1 config.go:97] "Starting endpoint slice config controller"
	I0229 02:36:20.877995       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 02:36:20.879324       1 config.go:315] "Starting node config controller"
	I0229 02:36:20.879359       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 02:36:20.979473       1 shared_informer.go:318] Caches are synced for node config
	I0229 02:36:20.979666       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 02:36:20.979673       1 shared_informer.go:318] Caches are synced for service config
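
kube-proxy came up cleanly in single-stack IPv4 iptables mode, so every Service is programmed into the KUBE-SERVICES chain of the nat table. That chain can be inspected on the node itself (a sketch, again via the profile's ssh helper):

  $ out/minikube-linux-amd64 -p default-k8s-diff-port-071485 ssh \
      "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"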
	
	
	==> kube-scheduler [15b0755a432276a801158c3dac15c6730ed4d680fbf1b94f113f6d0cfbbef349] <==
	W0229 02:36:03.256406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 02:36:03.256459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 02:36:03.256612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 02:36:03.256683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 02:36:03.256793       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 02:36:03.257010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 02:36:03.257123       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 02:36:03.257212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 02:36:04.125141       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 02:36:04.125247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 02:36:04.175213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 02:36:04.176085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 02:36:04.198276       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 02:36:04.198353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 02:36:04.244700       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 02:36:04.244956       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 02:36:04.244730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 02:36:04.245170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 02:36:04.371975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 02:36:04.372400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 02:36:04.438852       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 02:36:04.439123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 02:36:04.451147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 02:36:04.451256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0229 02:36:04.831765       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 02:50:06 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:50:06.851593    3725 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:50:06 default-k8s-diff-port-071485 kubelet[3725]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:50:06 default-k8s-diff-port-071485 kubelet[3725]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:50:06 default-k8s-diff-port-071485 kubelet[3725]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:50:06 default-k8s-diff-port-071485 kubelet[3725]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:50:12 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:50:12.803384    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:50:26 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:50:26.803232    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:50:37 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:50:37.802655    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:50:49 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:50:49.805355    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:51:02 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:51:02.803192    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:51:06 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:51:06.857651    3725 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:51:06 default-k8s-diff-port-071485 kubelet[3725]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:51:06 default-k8s-diff-port-071485 kubelet[3725]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:51:06 default-k8s-diff-port-071485 kubelet[3725]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:51:06 default-k8s-diff-port-071485 kubelet[3725]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 02:51:15 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:51:15.802635    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:51:27 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:51:27.802291    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:51:38 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:51:38.803323    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:51:49 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:51:49.802974    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:52:04 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:52:04.805594    3725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fpwzl" podUID="5215d27e-4bf2-4331-89f2-24096dc96b90"
	Feb 29 02:52:06 default-k8s-diff-port-071485 kubelet[3725]: E0229 02:52:06.856039    3725 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 02:52:06 default-k8s-diff-port-071485 kubelet[3725]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 02:52:06 default-k8s-diff-port-071485 kubelet[3725]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 02:52:06 default-k8s-diff-port-071485 kubelet[3725]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 02:52:06 default-k8s-diff-port-071485 kubelet[3725]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [01b4801ac4a5de443b13629edf5e50e742e413ff4a2cd1e5f15b26c15f2d2e02] <==
	I0229 02:36:22.114406       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 02:36:22.128623       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 02:36:22.128719       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 02:36:22.149465       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 02:36:22.149866       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071485_dd8f6f8e-7f5f-4044-a47a-0ebc4a263fbb!
	I0229 02:36:22.149999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e29c156f-0443-4041-ad13-643b9c57e32c", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-071485_dd8f6f8e-7f5f-4044-a47a-0ebc4a263fbb became leader
	I0229 02:36:22.252426       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071485_dd8f6f8e-7f5f-4044-a47a-0ebc4a263fbb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-071485 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-fpwzl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-071485 describe pod metrics-server-57f55c9bc5-fpwzl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-071485 describe pod metrics-server-57f55c9bc5-fpwzl: exit status 1 (83.048759ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fpwzl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-071485 describe pod metrics-server-57f55c9bc5-fpwzl: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (125.58s)

                                                
                                    

Test pass (240/309)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 40.95
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.15
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 21.9
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 13.17
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.58
31 TestOffline 101.52
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 157.31
38 TestAddons/parallel/Registry 18.31
40 TestAddons/parallel/InspektorGadget 12.11
41 TestAddons/parallel/MetricsServer 6.79
42 TestAddons/parallel/HelmTiller 14.11
44 TestAddons/parallel/CSI 79.54
45 TestAddons/parallel/Headlamp 16.68
46 TestAddons/parallel/CloudSpanner 6.8
47 TestAddons/parallel/LocalPath 55.71
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 96.49
55 TestCertExpiration 277.31
57 TestForceSystemdFlag 74.92
58 TestForceSystemdEnv 76.18
60 TestKVMDriverInstallOrUpdate 4.37
64 TestErrorSpam/setup 44.86
65 TestErrorSpam/start 0.38
66 TestErrorSpam/status 0.78
67 TestErrorSpam/pause 1.68
68 TestErrorSpam/unpause 1.78
69 TestErrorSpam/stop 2.26
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 86.55
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 29.36
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.99
81 TestFunctional/serial/CacheCmd/cache/add_local 2.25
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 33.07
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.51
92 TestFunctional/serial/LogsFileCmd 1.56
93 TestFunctional/serial/InvalidService 5.56
95 TestFunctional/parallel/ConfigCmd 0.4
96 TestFunctional/parallel/DashboardCmd 15.1
97 TestFunctional/parallel/DryRun 0.32
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 1.23
103 TestFunctional/parallel/ServiceCmdConnect 7.51
104 TestFunctional/parallel/AddonsCmd 0.45
105 TestFunctional/parallel/PersistentVolumeClaim 54.46
107 TestFunctional/parallel/SSHCmd 0.53
108 TestFunctional/parallel/CpCmd 1.55
109 TestFunctional/parallel/MySQL 33.43
110 TestFunctional/parallel/FileSync 0.25
111 TestFunctional/parallel/CertSync 1.28
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
119 TestFunctional/parallel/License 0.6
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
122 TestFunctional/parallel/ProfileCmd/profile_list 0.31
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
124 TestFunctional/parallel/MountCmd/any-port 9.81
125 TestFunctional/parallel/MountCmd/specific-port 1.86
126 TestFunctional/parallel/ServiceCmd/List 0.57
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
129 TestFunctional/parallel/ServiceCmd/Format 0.47
130 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
131 TestFunctional/parallel/ServiceCmd/URL 0.43
132 TestFunctional/parallel/Version/short 0.07
133 TestFunctional/parallel/Version/components 0.68
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
138 TestFunctional/parallel/ImageCommands/ImageBuild 3.81
139 TestFunctional/parallel/ImageCommands/Setup 2.1
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.34
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.28
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.77
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 15.35
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.41
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.19
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.31
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.02
172 TestJSONOutput/start/Command 98.73
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.78
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.69
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.11
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.21
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 94.32
204 TestMountStart/serial/StartWithMountFirst 28.86
205 TestMountStart/serial/VerifyMountFirst 0.41
206 TestMountStart/serial/StartWithMountSecond 25.98
207 TestMountStart/serial/VerifyMountSecond 0.38
208 TestMountStart/serial/DeleteFirst 0.69
209 TestMountStart/serial/VerifyMountPostDelete 0.39
210 TestMountStart/serial/Stop 1.25
214 TestMultiNode/serial/FreshStart2Nodes 107.04
215 TestMultiNode/serial/DeployApp2Nodes 5.6
216 TestMultiNode/serial/PingHostFrom2Pods 0.91
217 TestMultiNode/serial/AddNode 41.79
218 TestMultiNode/serial/MultiNodeLabels 0.07
219 TestMultiNode/serial/ProfileList 0.21
220 TestMultiNode/serial/CopyFile 7.56
221 TestMultiNode/serial/StopNode 2.32
222 TestMultiNode/serial/StartAfterStop 28.42
224 TestMultiNode/serial/DeleteNode 1.59
226 TestMultiNode/serial/RestartMultiNode 447.38
227 TestMultiNode/serial/ValidateNameConflict 49.76
234 TestScheduledStopUnix 116.74
238 TestRunningBinaryUpgrade 215.49
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
247 TestNoKubernetes/serial/StartWithK8s 98.4
252 TestNetworkPlugins/group/false 3.34
256 TestNoKubernetes/serial/StartWithStopK8s 45.91
257 TestStoppedBinaryUpgrade/Setup 3
258 TestStoppedBinaryUpgrade/Upgrade 139.97
259 TestNoKubernetes/serial/Start 52.05
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
261 TestNoKubernetes/serial/ProfileList 31.62
262 TestNoKubernetes/serial/Stop 1.57
263 TestNoKubernetes/serial/StartNoArgs 22.97
264 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
274 TestPause/serial/Start 106.44
275 TestNetworkPlugins/group/auto/Start 101.48
276 TestPause/serial/SecondStartNoReconfiguration 36.54
277 TestNetworkPlugins/group/kindnet/Start 69.48
278 TestNetworkPlugins/group/auto/KubeletFlags 0.24
279 TestNetworkPlugins/group/auto/NetCatPod 11.25
280 TestNetworkPlugins/group/auto/DNS 0.19
281 TestNetworkPlugins/group/auto/Localhost 0.16
282 TestNetworkPlugins/group/auto/HairPin 0.15
283 TestPause/serial/Pause 0.85
284 TestPause/serial/VerifyStatus 0.26
285 TestPause/serial/Unpause 0.71
286 TestPause/serial/PauseAgain 0.94
287 TestPause/serial/DeletePaused 0.98
288 TestPause/serial/VerifyDeletedResources 12.4
289 TestNetworkPlugins/group/calico/Start 107.6
290 TestNetworkPlugins/group/custom-flannel/Start 122.56
291 TestNetworkPlugins/group/enable-default-cni/Start 142.29
292 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
294 TestNetworkPlugins/group/kindnet/NetCatPod 13.27
295 TestNetworkPlugins/group/kindnet/DNS 0.19
296 TestNetworkPlugins/group/kindnet/Localhost 0.15
297 TestNetworkPlugins/group/kindnet/HairPin 0.15
298 TestNetworkPlugins/group/flannel/Start 98.35
299 TestNetworkPlugins/group/calico/ControllerPod 6.01
300 TestNetworkPlugins/group/calico/KubeletFlags 0.23
301 TestNetworkPlugins/group/calico/NetCatPod 12.52
302 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
303 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.08
304 TestNetworkPlugins/group/calico/DNS 0.35
305 TestNetworkPlugins/group/calico/Localhost 0.17
306 TestNetworkPlugins/group/calico/HairPin 0.17
307 TestNetworkPlugins/group/custom-flannel/DNS 0.19
308 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
309 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
310 TestNetworkPlugins/group/bridge/Start 100.43
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.28
315 TestNetworkPlugins/group/flannel/ControllerPod 6.01
316 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
319 TestNetworkPlugins/group/flannel/NetCatPod 12.35
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
321 TestNetworkPlugins/group/flannel/DNS 0.22
322 TestNetworkPlugins/group/flannel/Localhost 0.16
323 TestNetworkPlugins/group/flannel/HairPin 0.17
325 TestStartStop/group/no-preload/serial/FirstStart 125.37
327 TestStartStop/group/embed-certs/serial/FirstStart 105.12
328 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
329 TestNetworkPlugins/group/bridge/NetCatPod 11.32
330 TestNetworkPlugins/group/bridge/DNS 0.21
331 TestNetworkPlugins/group/bridge/Localhost 0.16
332 TestNetworkPlugins/group/bridge/HairPin 0.16
334 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 97.69
335 TestStartStop/group/embed-certs/serial/DeployApp 10.34
336 TestStartStop/group/no-preload/serial/DeployApp 10.32
337 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
339 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
348 TestStartStop/group/embed-certs/serial/SecondStart 653.68
349 TestStartStop/group/no-preload/serial/SecondStart 596.7
351 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 873.25
352 TestStartStop/group/old-k8s-version/serial/Stop 1.25
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
364 TestStartStop/group/newest-cni/serial/FirstStart 56.44
365 TestStartStop/group/newest-cni/serial/DeployApp 0
366 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.48
367 TestStartStop/group/newest-cni/serial/Stop 11.13
368 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
369 TestStartStop/group/newest-cni/serial/SecondStart 46.84
370 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
371 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
373 TestStartStop/group/newest-cni/serial/Pause 2.82
x
+
TestDownloadOnly/v1.16.0/json-events (40.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-425270 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-425270 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (40.950433432s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (40.95s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-425270
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-425270: exit status 85 (74.878817ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-425270 | jenkins | v1.32.0 | 29 Feb 24 01:10 UTC |          |
	|         | -p download-only-425270        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:10:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:10:42.409676  323897 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:10:42.409804  323897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:10:42.409815  323897 out.go:304] Setting ErrFile to fd 2...
	I0229 01:10:42.409820  323897 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:10:42.410018  323897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	W0229 01:10:42.410180  323897 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18063-316644/.minikube/config/config.json: open /home/jenkins/minikube-integration/18063-316644/.minikube/config/config.json: no such file or directory
	I0229 01:10:42.410770  323897 out.go:298] Setting JSON to true
	I0229 01:10:42.411920  323897 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3186,"bootTime":1709165857,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:10:42.411996  323897 start.go:139] virtualization: kvm guest
	I0229 01:10:42.414328  323897 out.go:97] [download-only-425270] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	W0229 01:10:42.414488  323897 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball: no such file or directory
	I0229 01:10:42.414540  323897 notify.go:220] Checking for updates...
	I0229 01:10:42.415897  323897 out.go:169] MINIKUBE_LOCATION=18063
	I0229 01:10:42.417541  323897 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:10:42.419020  323897 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:10:42.420324  323897 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:10:42.421676  323897 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 01:10:42.423991  323897 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 01:10:42.424273  323897 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:10:42.455554  323897 out.go:97] Using the kvm2 driver based on user configuration
	I0229 01:10:42.455598  323897 start.go:299] selected driver: kvm2
	I0229 01:10:42.455605  323897 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:10:42.455935  323897 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:10:42.456012  323897 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:10:42.472226  323897 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:10:42.472275  323897 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:10:42.472749  323897 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 01:10:42.472893  323897 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 01:10:42.472969  323897 cni.go:84] Creating CNI manager for ""
	I0229 01:10:42.472983  323897 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:10:42.472994  323897 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 01:10:42.473000  323897 start_flags.go:323] config:
	{Name:download-only-425270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-425270 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:10:42.473192  323897 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:10:42.474950  323897 out.go:97] Downloading VM boot image ...
	I0229 01:10:42.474978  323897 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18063-316644/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 01:10:52.061595  323897 out.go:97] Starting control plane node download-only-425270 in cluster download-only-425270
	I0229 01:10:52.061625  323897 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 01:10:52.173564  323897 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 01:10:52.173598  323897 cache.go:56] Caching tarball of preloaded images
	I0229 01:10:52.173772  323897 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 01:10:52.175449  323897 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 01:10:52.175466  323897 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0229 01:10:52.288570  323897 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0229 01:11:06.396792  323897 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0229 01:11:06.396897  323897 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0229 01:11:07.352372  323897 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0229 01:11:07.352721  323897 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/download-only-425270/config.json ...
	I0229 01:11:07.352753  323897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/download-only-425270/config.json: {Name:mkcc3d8ea8166a56274e793c3cb3fe65666375d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:11:07.352941  323897 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0229 01:11:07.353089  323897 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/18063-316644/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-425270"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-425270
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (21.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-057025 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-057025 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (21.895779972s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (21.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-057025
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-057025: exit status 85 (75.342373ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-425270 | jenkins | v1.32.0 | 29 Feb 24 01:10 UTC |                     |
	|         | -p download-only-425270        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-425270        | download-only-425270 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| start   | -o=json --download-only        | download-only-057025 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC |                     |
	|         | -p download-only-057025        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:11:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:11:23.721675  324142 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:11:23.721930  324142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:11:23.721941  324142 out.go:304] Setting ErrFile to fd 2...
	I0229 01:11:23.721945  324142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:11:23.722137  324142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:11:23.722763  324142 out.go:298] Setting JSON to true
	I0229 01:11:23.723721  324142 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3227,"bootTime":1709165857,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:11:23.723796  324142 start.go:139] virtualization: kvm guest
	I0229 01:11:23.725970  324142 out.go:97] [download-only-057025] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:11:23.727714  324142 out.go:169] MINIKUBE_LOCATION=18063
	I0229 01:11:23.726120  324142 notify.go:220] Checking for updates...
	I0229 01:11:23.730429  324142 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:11:23.732160  324142 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:11:23.733702  324142 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:11:23.735101  324142 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 01:11:23.738172  324142 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 01:11:23.738433  324142 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:11:23.770176  324142 out.go:97] Using the kvm2 driver based on user configuration
	I0229 01:11:23.770211  324142 start.go:299] selected driver: kvm2
	I0229 01:11:23.770217  324142 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:11:23.770668  324142 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:11:23.770810  324142 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:11:23.785887  324142 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:11:23.785969  324142 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:11:23.786656  324142 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 01:11:23.786836  324142 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 01:11:23.786936  324142 cni.go:84] Creating CNI manager for ""
	I0229 01:11:23.786954  324142 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:11:23.786967  324142 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 01:11:23.786981  324142 start_flags.go:323] config:
	{Name:download-only-057025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-057025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:11:23.787166  324142 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:11:23.788944  324142 out.go:97] Starting control plane node download-only-057025 in cluster download-only-057025
	I0229 01:11:23.788959  324142 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:11:23.898483  324142 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 01:11:23.898519  324142 cache.go:56] Caching tarball of preloaded images
	I0229 01:11:23.898702  324142 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:11:23.900624  324142 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0229 01:11:23.900644  324142 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0229 01:11:24.009936  324142 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0229 01:11:36.474141  324142 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0229 01:11:36.474266  324142 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0229 01:11:37.343170  324142 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0229 01:11:37.343579  324142 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/download-only-057025/config.json ...
	I0229 01:11:37.343616  324142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/download-only-057025/config.json: {Name:mk877edec73ec792e8d23ea6ab75a4ea207eb8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:11:37.343778  324142 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0229 01:11:37.343919  324142 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18063-316644/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-057025"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-057025
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (13.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-561532 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-561532 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.17409549s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (13.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-561532
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-561532: exit status 85 (73.839032ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-425270 | jenkins | v1.32.0 | 29 Feb 24 01:10 UTC |                     |
	|         | -p download-only-425270           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-425270           | download-only-425270 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| start   | -o=json --download-only           | download-only-057025 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC |                     |
	|         | -p download-only-057025           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| delete  | -p download-only-057025           | download-only-057025 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC | 29 Feb 24 01:11 UTC |
	| start   | -o=json --download-only           | download-only-561532 | jenkins | v1.32.0 | 29 Feb 24 01:11 UTC |                     |
	|         | -p download-only-561532           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:11:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:11:45.980322  324330 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:11:45.980459  324330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:11:45.980472  324330 out.go:304] Setting ErrFile to fd 2...
	I0229 01:11:45.980476  324330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:11:45.980732  324330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:11:45.981349  324330 out.go:298] Setting JSON to true
	I0229 01:11:45.982397  324330 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3249,"bootTime":1709165857,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:11:45.982478  324330 start.go:139] virtualization: kvm guest
	I0229 01:11:45.984803  324330 out.go:97] [download-only-561532] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:11:45.985004  324330 notify.go:220] Checking for updates...
	I0229 01:11:45.986637  324330 out.go:169] MINIKUBE_LOCATION=18063
	I0229 01:11:45.988405  324330 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:11:45.989913  324330 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:11:45.991233  324330 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:11:45.992571  324330 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 01:11:45.995288  324330 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 01:11:45.995555  324330 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:11:46.027331  324330 out.go:97] Using the kvm2 driver based on user configuration
	I0229 01:11:46.027383  324330 start.go:299] selected driver: kvm2
	I0229 01:11:46.027395  324330 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:11:46.027883  324330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:11:46.028006  324330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-316644/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:11:46.044502  324330 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:11:46.044580  324330 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:11:46.045269  324330 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 01:11:46.045547  324330 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 01:11:46.045635  324330 cni.go:84] Creating CNI manager for ""
	I0229 01:11:46.045653  324330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0229 01:11:46.045665  324330 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 01:11:46.045677  324330 start_flags.go:323] config:
	{Name:download-only-561532 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-561532 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:11:46.045892  324330 iso.go:125] acquiring lock: {Name:mk974d80da9153d6536889d8696366d0e5af7a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:11:46.047706  324330 out.go:97] Starting control plane node download-only-561532 in cluster download-only-561532
	I0229 01:11:46.047723  324330 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 01:11:46.154168  324330 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0229 01:11:46.154241  324330 cache.go:56] Caching tarball of preloaded images
	I0229 01:11:46.154442  324330 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0229 01:11:46.156396  324330 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0229 01:11:46.156424  324330 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0229 01:11:46.265917  324330 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18063-316644/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-561532"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)
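
The preload URL in the log above carries its own md5 in the checksum query parameter, so the tarball can also be verified by hand. A minimal sketch, assuming curl and md5sum on the host (URL and hash copied from this run; the query string is dropped so the file gets a clean name):

    # Fetch the preload tarball, then check it against the advertised md5.
    curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4"
    echo "9e0f57288adacc30aad3ff7e72a8dc68  preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4" | md5sum -c -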

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-561532
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-801156 --alsologtostderr --binary-mirror http://127.0.0.1:39823 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-801156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-801156
--- PASS: TestBinaryMirror (0.58s)
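
TestBinaryMirror only needs an HTTP endpoint serving the Kubernetes binaries minikube would otherwise fetch upstream. A rough sketch of doing the same by hand, assuming python3 is available; the mirror directory, its layout, and the profile name here are illustrative, not from this run:

    # Serve a local directory over HTTP, then point minikube's binary downloads at it.
    python3 -m http.server 39823 --directory /path/to/mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:39823 --driver=kvm2 --container-runtime=crio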

TestOffline (101.52s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-395379 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-395379 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m40.491234912s)
helpers_test.go:175: Cleaning up "offline-crio-395379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-395379
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-395379: (1.028818111s)
--- PASS: TestOffline (101.52s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-600097
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-600097: exit status 85 (64.636273ms)
-- stdout --
	* Profile "addons-600097" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-600097"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-600097
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-600097: exit status 85 (66.040863ms)
-- stdout --
	* Profile "addons-600097" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-600097"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (157.31s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-600097 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-600097 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m37.309196955s)
--- PASS: TestAddons/Setup (157.31s)
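
The setup run enables everything up front through repeated --addons flags, but the same addons can be toggled individually on the running profile, which is what the parallel subtests below exercise. A minimal sketch against the profile from this run:

    # Toggle a single addon on the live cluster and list the current state.
    out/minikube-linux-amd64 -p addons-600097 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-600097 addons list
    out/minikube-linux-amd64 -p addons-600097 addons disable metrics-server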

TestAddons/parallel/Registry (18.31s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 33.650378ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-q4qbx" [44db4128-7109-4402-9de5-49bec8724d9f] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005529787s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rntnp" [48e9e81a-42f9-4d1d-9354-285750cd1bd8] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005537777s
addons_test.go:340: (dbg) Run:  kubectl --context addons-600097 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-600097 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-600097 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.982109973s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-600097 addons disable registry --alsologtostderr -v=1: (1.1143155s)
--- PASS: TestAddons/parallel/Registry (18.31s)
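
The registry check reduces to one in-cluster HTTP probe against the addon's Service DNS name; it can be rerun by hand with the same command the test uses:

    # Probe the registry Service from a throwaway busybox pod inside the cluster.
    kubectl --context addons-600097 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"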

TestAddons/parallel/InspektorGadget (12.11s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kgb4z" [c0322ab5-1e1e-484e-a540-c0ed56db9437] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004494871s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-600097
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-600097: (6.100193253s)
--- PASS: TestAddons/parallel/InspektorGadget (12.11s)

TestAddons/parallel/MetricsServer (6.79s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.410739ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-hrq8h" [e7098420-28d2-4a6b-a93d-4fefa31359b3] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00497441s
addons_test.go:415: (dbg) Run:  kubectl --context addons-600097 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.79s)

TestAddons/parallel/HelmTiller (14.11s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 10.20734ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-w6sfn" [d68c9fec-87de-4b51-b793-1fce3f10efe2] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.008813002s
addons_test.go:473: (dbg) Run:  kubectl --context addons-600097 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-600097 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.347960219s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.11s)

TestAddons/parallel/CSI (79.54s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 37.082438ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-600097 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/02/29 01:14:55 [DEBUG] GET http://192.168.39.181:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-600097 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [74ced382-368b-4f97-9124-f2ba65827e5d] Pending
helpers_test.go:344: "task-pv-pod" [74ced382-368b-4f97-9124-f2ba65827e5d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [74ced382-368b-4f97-9124-f2ba65827e5d] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005296036s
addons_test.go:584: (dbg) Run:  kubectl --context addons-600097 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-600097 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-600097 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-600097 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-600097 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-600097 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-600097 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c526f9d5-2518-443c-a464-6a72fff00c13] Pending
helpers_test.go:344: "task-pv-pod-restore" [c526f9d5-2518-443c-a464-6a72fff00c13] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c526f9d5-2518-443c-a464-6a72fff00c13] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004620932s
addons_test.go:626: (dbg) Run:  kubectl --context addons-600097 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-600097 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-600097 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-600097 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.799650498s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (79.54s)
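
The long run of helpers_test.go:394 lines above is the harness polling the claim's phase via jsonpath until the CSI provisioner binds it. The equivalent manual loop, as a sketch (the 2s interval is arbitrary):

    # Poll the PVC until it leaves Pending and reports Bound.
    until [ "$(kubectl --context addons-600097 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done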

TestAddons/parallel/Headlamp (16.68s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-600097 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-600097 --alsologtostderr -v=1: (1.676875019s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-lsqz4" [86feeed9-5827-47a5-bcb1-f939810036ba] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-lsqz4" [86feeed9-5827-47a5-bcb1-f939810036ba] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-lsqz4" [86feeed9-5827-47a5-bcb1-f939810036ba] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-lsqz4" [86feeed9-5827-47a5-bcb1-f939810036ba] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.00400267s
--- PASS: TestAddons/parallel/Headlamp (16.68s)

TestAddons/parallel/CloudSpanner (6.8s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-hv5w7" [49910ff0-7730-474f-9800-bfae43f13a88] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00556351s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-600097
--- PASS: TestAddons/parallel/CloudSpanner (6.80s)

TestAddons/parallel/LocalPath (55.71s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-600097 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-600097 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-600097 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b289fc35-8197-4641-8555-e11426bb231c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b289fc35-8197-4641-8555-e11426bb231c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b289fc35-8197-4641-8555-e11426bb231c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00631508s
addons_test.go:891: (dbg) Run:  kubectl --context addons-600097 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 ssh "cat /opt/local-path-provisioner/pvc-46cdb420-a06c-4c86-b1c5-0196b03f5f20_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-600097 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-600097 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-600097 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-600097 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.720311847s)
--- PASS: TestAddons/parallel/LocalPath (55.71s)

TestAddons/parallel/Yakd (5.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-qmvcb" [e5aa7bf3-4864-4a99-89f8-7130c9effa51] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004647468s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-600097 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-600097 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
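
The namespace test passes because the gcp-auth addon copies its secret into namespaces created after the addon was enabled. Checking that by hand is two commands (the namespace name is arbitrary):

    # A fresh namespace should receive the replicated gcp-auth secret.
    kubectl --context addons-600097 create ns demo-ns
    kubectl --context addons-600097 get secret gcp-auth -n demo-ns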

TestCertOptions (96.49s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-501178 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0229 02:14:09.040566  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 02:14:20.880840  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-501178 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m34.980230764s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-501178 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-501178 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-501178 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-501178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-501178
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-501178: (1.02752053s)
--- PASS: TestCertOptions (96.49s)
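
The openssl step is where the --apiserver-ips/--apiserver-names/--apiserver-port flags are actually asserted; filtering its output makes that visible. A sketch against the same profile while it is still up:

    # The extra IPs and hostnames should show up as Subject Alternative Names.
    out/minikube-linux-amd64 -p cert-options-501178 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'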

TestCertExpiration (277.31s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-283864 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-283864 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m4.787225675s)
E0229 02:14:37.825316  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-283864 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-283864 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (31.482319808s)
helpers_test.go:175: Cleaning up "cert-expiration-283864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-283864
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-283864: (1.039606446s)
--- PASS: TestCertExpiration (277.31s)
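
Between the two starts the cluster really is running on short-lived certificates; the expiry can be read straight off the apiserver certificate (cert path reused from TestCertOptions above):

    # Print the notAfter timestamp of the current apiserver certificate.
    out/minikube-linux-amd64 -p cert-expiration-283864 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"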

TestForceSystemdFlag (74.92s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-153144 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-153144 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.69929855s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-153144 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-153144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-153144
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-153144: (1.008200303s)
--- PASS: TestForceSystemdFlag (74.92s)
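
The cat of 02-crio.conf is how the test confirms --force-systemd reached the runtime. A narrower sketch, assuming the drop-in sets CRI-O's usual cgroup_manager key:

    # With --force-systemd, CRI-O should be configured for the systemd cgroup manager.
    out/minikube-linux-amd64 -p force-systemd-flag-153144 ssh \
      "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"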

TestForceSystemdEnv (76.18s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-540640 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-540640 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m15.115224687s)
helpers_test.go:175: Cleaning up "force-systemd-env-540640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-540640
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-540640: (1.065953463s)
--- PASS: TestForceSystemdEnv (76.18s)

TestKVMDriverInstallOrUpdate (4.37s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.37s)

TestErrorSpam/setup (44.86s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-542427 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-542427 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-542427 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-542427 --driver=kvm2  --container-runtime=crio: (44.85679443s)
--- PASS: TestErrorSpam/setup (44.86s)

TestErrorSpam/start (0.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.78s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.68s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 pause
--- PASS: TestErrorSpam/pause (1.68s)

TestErrorSpam/unpause (1.78s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (2.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 stop: (2.094267365s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-542427 --log_dir /tmp/nospam-542427 stop
--- PASS: TestErrorSpam/stop (2.26s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18063-316644/.minikube/files/etc/test/nested/copy/323885/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (86.55s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-921098 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-921098 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m26.545802229s)
--- PASS: TestFunctional/serial/StartWithProxy (86.55s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.36s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-921098 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-921098 --alsologtostderr -v=8: (29.357243604s)
functional_test.go:659: soft start took 29.358255423s for "functional-921098" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.36s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-921098 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.99s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 cache add registry.k8s.io/pause:3.3: (1.01716217s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 cache add registry.k8s.io/pause:latest: (1.033244731s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.99s)

TestFunctional/serial/CacheCmd/cache/add_local (2.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-921098 /tmp/TestFunctionalserialCacheCmdcacheadd_local416716556/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 cache add minikube-local-cache-test:functional-921098
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 cache add minikube-local-cache-test:functional-921098: (1.908050742s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 cache delete minikube-local-cache-test:functional-921098
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-921098
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)
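The local-image flow above can be reproduced by hand; a minimal sketch, assuming a local Docker daemon (the image tag is illustrative):

    docker build -t minikube-local-cache-test:dev .
    minikube -p functional-921098 cache add minikube-local-cache-test:dev      # export and load into the node
    minikube -p functional-921098 cache delete minikube-local-cache-test:dev
    docker rmi minikube-local-cache-test:dev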

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-921098 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (236.831804ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
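The sequence above is the point of the test: remove an image inside the node, confirm crictl no longer sees it, then restore it from the host-side cache. A hand-run equivalent:

    minikube -p functional-921098 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-921098 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
    minikube -p functional-921098 cache reload                                            # re-load cached images into the node
    minikube -p functional-921098 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again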

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 kubectl -- --context functional-921098 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-921098 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (33.07s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-921098 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-921098 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.070266506s)
functional_test.go:757: restart took 33.070402812s for "functional-921098" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.07s)
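--extra-config takes component.key=value pairs that are passed through to the named control-plane component, and a start against a running profile re-provisions it with the new flags. A sketch of the invocation:

    minikube start -p functional-921098 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all    # block until every component reports ready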

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-921098 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
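The health check walks the control-plane pods' JSON for phase and Ready status; roughly the same view can be pulled with a jsonpath query (a sketch, not the test's exact code):

    kubectl --context functional-921098 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'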

TestFunctional/serial/LogsCmd (1.51s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 logs: (1.50612468s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.56s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 logs --file /tmp/TestFunctionalserialLogsFileCmd3940918278/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 logs --file /tmp/TestFunctionalserialLogsFileCmd3940918278/001/logs.txt: (1.563832725s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.56s)

TestFunctional/serial/InvalidService (5.56s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-921098 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-921098
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-921098: exit status 115 (314.679298ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.54:30461 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-921098 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-921098 delete -f testdata/invalidsvc.yaml: (2.023653763s)
--- PASS: TestFunctional/serial/InvalidService (5.56s)
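Exit code 115 (SVC_UNREACHABLE) is what minikube service returns when the service exists but no running pod backs it; the reproduction only needs the broken service manifest from testdata:

    kubectl --context functional-921098 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-921098    # exit 115: no running pod for the service
    kubectl --context functional-921098 delete -f testdata/invalidsvc.yaml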

TestFunctional/parallel/ConfigCmd (0.4s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-921098 config get cpus: exit status 14 (72.911552ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-921098 config get cpus: exit status 14 (58.256524ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
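Note the exit-code contract the test relies on: config get exits 14 when the key is not present, so the unset/get pairs above are expected failures. Sketch:

    minikube -p functional-921098 config set cpus 2
    minikube -p functional-921098 config get cpus     # prints 2, exit 0
    minikube -p functional-921098 config unset cpus
    minikube -p functional-921098 config get cpus     # exit 14: key not in config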

TestFunctional/parallel/DashboardCmd (15.1s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-921098 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-921098 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 331447: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.10s)

TestFunctional/parallel/DryRun (0.32s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-921098 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-921098 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (159.329705ms)
-- stdout --
	* [functional-921098] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0229 01:24:12.059329  331345 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:24:12.059616  331345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:24:12.059628  331345 out.go:304] Setting ErrFile to fd 2...
	I0229 01:24:12.059634  331345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:24:12.059903  331345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:24:12.060600  331345 out.go:298] Setting JSON to false
	I0229 01:24:12.061964  331345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3995,"bootTime":1709165857,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:24:12.062067  331345 start.go:139] virtualization: kvm guest
	I0229 01:24:12.064482  331345 out.go:177] * [functional-921098] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:24:12.066239  331345 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:24:12.067765  331345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:24:12.066271  331345 notify.go:220] Checking for updates...
	I0229 01:24:12.070342  331345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:24:12.071696  331345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:24:12.072905  331345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:24:12.074118  331345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:24:12.075728  331345 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:24:12.076109  331345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:24:12.076190  331345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:24:12.091788  331345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0229 01:24:12.092186  331345 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:24:12.092760  331345 main.go:141] libmachine: Using API Version  1
	I0229 01:24:12.092784  331345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:24:12.093139  331345 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:24:12.093313  331345 main.go:141] libmachine: (functional-921098) Calling .DriverName
	I0229 01:24:12.093576  331345 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:24:12.093913  331345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:24:12.093957  331345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:24:12.109037  331345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34995
	I0229 01:24:12.109518  331345 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:24:12.110029  331345 main.go:141] libmachine: Using API Version  1
	I0229 01:24:12.110062  331345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:24:12.110413  331345 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:24:12.110622  331345 main.go:141] libmachine: (functional-921098) Calling .DriverName
	I0229 01:24:12.143372  331345 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 01:24:12.144684  331345 start.go:299] selected driver: kvm2
	I0229 01:24:12.144704  331345 start.go:903] validating driver "kvm2" against &{Name:functional-921098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-921098 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.54 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:24:12.144878  331345 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:24:12.146956  331345 out.go:177] 
	W0229 01:24:12.148195  331345 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0229 01:24:12.149375  331345 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-921098 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
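--dry-run exercises the full validation path without creating or changing anything, which is why the undersized memory request fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY). Sketch:

    minikube start -p functional-921098 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    echo $?    # 23: requested memory is below the usable minimum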

TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-921098 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-921098 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (173.443191ms)
-- stdout --
	* [functional-921098] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0229 01:24:11.887050  331306 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:24:11.887871  331306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:24:11.887936  331306 out.go:304] Setting ErrFile to fd 2...
	I0229 01:24:11.887957  331306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:24:11.888612  331306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:24:11.889489  331306 out.go:298] Setting JSON to false
	I0229 01:24:11.890929  331306 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3995,"bootTime":1709165857,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:24:11.891060  331306 start.go:139] virtualization: kvm guest
	I0229 01:24:11.893550  331306 out.go:177] * [functional-921098] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0229 01:24:11.895668  331306 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:24:11.895685  331306 notify.go:220] Checking for updates...
	I0229 01:24:11.897542  331306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:24:11.899149  331306 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 01:24:11.900679  331306 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 01:24:11.902241  331306 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:24:11.903820  331306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:24:11.905796  331306 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:24:11.906438  331306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:24:11.906523  331306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:24:11.922804  331306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0229 01:24:11.923269  331306 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:24:11.923883  331306 main.go:141] libmachine: Using API Version  1
	I0229 01:24:11.923904  331306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:24:11.924273  331306 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:24:11.924473  331306 main.go:141] libmachine: (functional-921098) Calling .DriverName
	I0229 01:24:11.924735  331306 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:24:11.925051  331306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:24:11.925113  331306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:24:11.942662  331306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46305
	I0229 01:24:11.943215  331306 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:24:11.943750  331306 main.go:141] libmachine: Using API Version  1
	I0229 01:24:11.943776  331306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:24:11.944128  331306 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:24:11.944381  331306 main.go:141] libmachine: (functional-921098) Calling .DriverName
	I0229 01:24:11.982487  331306 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0229 01:24:11.983627  331306 start.go:299] selected driver: kvm2
	I0229 01:24:11.983643  331306 start.go:903] validating driver "kvm2" against &{Name:functional-921098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-921098 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.54 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:24:11.983789  331306 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:24:11.985986  331306 out.go:177] 
	W0229 01:24:11.987699  331306 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0229 01:24:11.988976  331306 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.23s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)

TestFunctional/parallel/ServiceCmdConnect (7.51s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-921098 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-921098 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-jc4qf" [541599bd-c4df-48f2-a52e-322557ac62ac] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-jc4qf" [541599bd-c4df-48f2-a52e-322557ac62ac] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004736235s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.54:30492
functional_test.go:1671: http://192.168.39.54:30492: success! body:

Hostname: hello-node-connect-55497b8b78-jc4qf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.54:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.54:30492
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.51s)
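The connectivity check reduces to exposing a deployment as a NodePort and fetching the URL minikube resolves for it; a hand-run sketch (the deployment name is illustrative):

    kubectl --context functional-921098 create deployment hello --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-921098 expose deployment hello --type=NodePort --port=8080
    curl "$(minikube -p functional-921098 service hello --url)"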

TestFunctional/parallel/AddonsCmd (0.45s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.45s)

TestFunctional/parallel/PersistentVolumeClaim (54.46s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fda50a42-31a1-43ef-9860-f1a4cf1cd316] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00455493s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-921098 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-921098 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-921098 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-921098 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-921098 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [999c4728-20be-4da5-a1d5-52fd10088f58] Pending
helpers_test.go:344: "sp-pod" [999c4728-20be-4da5-a1d5-52fd10088f58] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [999c4728-20be-4da5-a1d5-52fd10088f58] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.005421371s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-921098 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-921098 delete -f testdata/storage-provisioner/pod.yaml
E0229 01:24:42.947854  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-921098 delete -f testdata/storage-provisioner/pod.yaml: (3.453568231s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-921098 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5db7bbba-e979-4d17-8936-1ca031f328be] Pending
helpers_test.go:344: "sp-pod" [5db7bbba-e979-4d17-8936-1ca031f328be] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0229 01:24:48.068138  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [5db7bbba-e979-4d17-8936-1ca031f328be] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.006323204s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-921098 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (54.46s)
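The sequence above is a persistence check: write through the mounted claim, delete and recreate the pod, and confirm the file survived. A sketch using the manifests from testdata:

    kubectl --context functional-921098 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-921098 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-921098 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-921098 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-921098 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
    kubectl --context functional-921098 exec sp-pod -- ls /tmp/mount                     # foo is still there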

TestFunctional/parallel/SSHCmd (0.53s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.55s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh -n functional-921098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 cp functional-921098:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3974552405/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh -n functional-921098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh -n functional-921098 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.55s)

TestFunctional/parallel/MySQL (33.43s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-921098 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-9gtd5" [5670670c-777a-482d-b7d8-0b23bc82bca6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-9gtd5" [5670670c-777a-482d-b7d8-0b23bc82bca6] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.006031551s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-921098 exec mysql-859648c796-9gtd5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-921098 exec mysql-859648c796-9gtd5 -- mysql -ppassword -e "show databases;": exit status 1 (354.045722ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-921098 exec mysql-859648c796-9gtd5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-921098 exec mysql-859648c796-9gtd5 -- mysql -ppassword -e "show databases;": exit status 1 (141.567248ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
E0229 01:24:58.308891  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-921098 exec mysql-859648c796-9gtd5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-921098 exec mysql-859648c796-9gtd5 -- mysql -ppassword -e "show databases;": exit status 1 (158.325704ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-921098 exec mysql-859648c796-9gtd5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (33.43s)
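The repeated non-zero exits are expected warm-up noise: the pod reports Running before mysqld finishes initializing, so the test keeps retrying until the socket accepts connections. A hand-rolled version of that retry loop:

    until kubectl --context functional-921098 exec deploy/mysql -- \
          mysql -ppassword -e 'show databases;'; do
      sleep 2    # mysqld not ready yet; try again
    done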

TestFunctional/parallel/FileSync (0.25s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/323885/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo cat /etc/test/nested/copy/323885/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.28s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/323885.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo cat /etc/ssl/certs/323885.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/323885.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo cat /usr/share/ca-certificates/323885.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3238852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo cat /etc/ssl/certs/3238852.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3238852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo cat /usr/share/ca-certificates/3238852.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)
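The .0 files checked at the end are OpenSSL subject-hash names: each synced PEM must also be reachable in /etc/ssl/certs under the 8-hex-digit hash OpenSSL computes for its subject. The hash for a given certificate can be printed with:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/323885.pem    # prints the hash used for the .0 filename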

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-921098 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-921098 ssh "sudo systemctl is-active docker": exit status 1 (244.274042ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-921098 ssh "sudo systemctl is-active containerd": exit status 1 (239.340656ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
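The stdout/exit pairing above is the systemd contract: systemctl is-active prints the unit state and exits non-zero (3 for an inactive unit), which minikube ssh surfaces as its own exit status 1. Sketch:

    minikube -p functional-921098 ssh "sudo systemctl is-active crio"      # active, exit 0
    minikube -p functional-921098 ssh "sudo systemctl is-active docker"    # inactive, command fails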

TestFunctional/parallel/License (0.6s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.60s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-921098 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-921098 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-zlqs2" [c8c9bc17-5da6-4963-910e-ec8a943bb2ee] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-zlqs2" [c8c9bc17-5da6-4963-910e-ec8a943bb2ee] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004372921s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "247.910376ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "63.59901ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "254.13202ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "61.824201ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (9.81s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdany-port3749541065/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709169850397135809" to /tmp/TestFunctionalparallelMountCmdany-port3749541065/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709169850397135809" to /tmp/TestFunctionalparallelMountCmdany-port3749541065/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709169850397135809" to /tmp/TestFunctionalparallelMountCmdany-port3749541065/001/test-1709169850397135809
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.314647ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 29 01:24 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 29 01:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 29 01:24 test-1709169850397135809
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh cat /mount-9p/test-1709169850397135809
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-921098 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bd142a80-c56f-4752-9983-a8300de7895d] Pending
helpers_test.go:344: "busybox-mount" [bd142a80-c56f-4752-9983-a8300de7895d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bd142a80-c56f-4752-9983-a8300de7895d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bd142a80-c56f-4752-9983-a8300de7895d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004615735s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-921098 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdany-port3749541065/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.81s)
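For readers reproducing this by hand: the sequence the test exercises is just the mount daemon plus a 9p check from inside the guest. A minimal sketch, assuming a running profile named functional-921098 and a scratch host directory /tmp/demo (both placeholders for whatever you use locally):

  # start the 9p mount daemon in the background (host dir -> guest /mount-9p)
  out/minikube-linux-amd64 mount -p functional-921098 /tmp/demo:/mount-9p &
  # verify the guest sees a 9p filesystem at the mount point; a retry right
  # after startup may be needed, as the first findmnt failure above shows
  out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T /mount-9p | grep 9p"
  # tear the mount down when done
  out/minikube-linux-amd64 -p functional-921098 ssh "sudo umount -f /mount-9p"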
TestFunctional/parallel/MountCmd/specific-port (1.86s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdspecific-port14794353/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.849401ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdspecific-port14794353/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-921098 ssh "sudo umount -f /mount-9p": exit status 1 (315.931096ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-921098 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdspecific-port14794353/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)
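Note on the --port flag: the daemon invocation above pins the 9p server to host port 46464 instead of letting it pick an ephemeral one. A one-line sketch with the same placeholder paths as before:

  out/minikube-linux-amd64 mount -p functional-921098 /tmp/demo:/mount-9p --port 46464 &

The forced umount failing with "not mounted" (status 32) appears to be tolerated here: the mount daemon had already been stopped, so the cleanup records the non-zero exit and the test still passes.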
TestFunctional/parallel/ServiceCmd/List (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 service list -o json
functional_test.go:1490: Took "532.785048ms" to run "out/minikube-linux-amd64 -p functional-921098 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)
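Since service list -o json emits machine-readable output, it composes with standard JSON tooling; a minimal sketch, assuming jq is installed on the host (jq and the pretty-print filter are not part of the test itself):

  out/minikube-linux-amd64 -p functional-921098 service list -o json | jq .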
TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.54:31204
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2189985645/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2189985645/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2189985645/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T" /mount1: exit status 1 (322.163233ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-921098 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2189985645/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2189985645/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-921098 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2189985645/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)
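The cleanup path verified here is the --kill flag, which tears down every mount daemon belonging to the profile in one call rather than stopping the three background processes individually (hence the "unable to find parent, assuming dead" lines from the per-process stop attempts afterwards):

  out/minikube-linux-amd64 mount -p functional-921098 --kill=true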
TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.54:31204
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.68s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-921098 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-921098
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-921098
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-921098 image ls --format short --alsologtostderr:
I0229 01:24:56.256985  333168 out.go:291] Setting OutFile to fd 1 ...
I0229 01:24:56.257303  333168 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:24:56.257315  333168 out.go:304] Setting ErrFile to fd 2...
I0229 01:24:56.257319  333168 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:24:56.257524  333168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
I0229 01:24:56.258197  333168 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 01:24:56.258342  333168 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 01:24:56.258757  333168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 01:24:56.258864  333168 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:24:56.275547  333168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35267
I0229 01:24:56.276141  333168 main.go:141] libmachine: () Calling .GetVersion
I0229 01:24:56.276886  333168 main.go:141] libmachine: Using API Version  1
I0229 01:24:56.276914  333168 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:24:56.277472  333168 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:24:56.277883  333168 main.go:141] libmachine: (functional-921098) Calling .GetState
I0229 01:24:56.280285  333168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 01:24:56.280344  333168 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:24:56.296292  333168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39521
I0229 01:24:56.296733  333168 main.go:141] libmachine: () Calling .GetVersion
I0229 01:24:56.297268  333168 main.go:141] libmachine: Using API Version  1
I0229 01:24:56.297296  333168 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:24:56.297625  333168 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:24:56.297826  333168 main.go:141] libmachine: (functional-921098) Calling .DriverName
I0229 01:24:56.298036  333168 ssh_runner.go:195] Run: systemctl --version
I0229 01:24:56.298061  333168 main.go:141] libmachine: (functional-921098) Calling .GetSSHHostname
I0229 01:24:56.301307  333168 main.go:141] libmachine: (functional-921098) DBG | domain functional-921098 has defined MAC address 52:54:00:04:90:72 in network mk-functional-921098
I0229 01:24:56.301712  333168 main.go:141] libmachine: (functional-921098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:90:72", ip: ""} in network mk-functional-921098: {Iface:virbr1 ExpiryTime:2024-02-29 02:21:39 +0000 UTC Type:0 Mac:52:54:00:04:90:72 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-921098 Clientid:01:52:54:00:04:90:72}
I0229 01:24:56.301750  333168 main.go:141] libmachine: (functional-921098) DBG | domain functional-921098 has defined IP address 192.168.39.54 and MAC address 52:54:00:04:90:72 in network mk-functional-921098
I0229 01:24:56.301872  333168 main.go:141] libmachine: (functional-921098) Calling .GetSSHPort
I0229 01:24:56.302052  333168 main.go:141] libmachine: (functional-921098) Calling .GetSSHKeyPath
I0229 01:24:56.302200  333168 main.go:141] libmachine: (functional-921098) Calling .GetSSHUsername
I0229 01:24:56.302408  333168 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/functional-921098/id_rsa Username:docker}
I0229 01:24:56.389301  333168 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:24:56.475481  333168 main.go:141] libmachine: Making call to close driver server
I0229 01:24:56.475497  333168 main.go:141] libmachine: (functional-921098) Calling .Close
I0229 01:24:56.475821  333168 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:24:56.475855  333168 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:24:56.475866  333168 main.go:141] libmachine: Making call to close driver server
I0229 01:24:56.475878  333168 main.go:141] libmachine: (functional-921098) Calling .Close
I0229 01:24:56.476158  333168 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:24:56.476169  333168 main.go:141] libmachine: (functional-921098) DBG | Closing plugin on server side
I0229 01:24:56.476177  333168 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
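The four ImageCommands/ImageList* subtests in this group exercise the same listing through each supported --format value; for quick reference:

  out/minikube-linux-amd64 -p functional-921098 image ls --format short
  out/minikube-linux-amd64 -p functional-921098 image ls --format table
  out/minikube-linux-amd64 -p functional-921098 image ls --format json
  out/minikube-linux-amd64 -p functional-921098 image ls --format yaml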
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-921098 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/library/nginx                 | latest             | e4720093a3c13 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-921098  | ffd4cfbbe753e | 34.1MB |
| localhost/minikube-local-cache-test     | functional-921098  | db26a95fd106f | 3.35kB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-921098 image ls --format table --alsologtostderr:
I0229 01:24:56.819557  333267 out.go:291] Setting OutFile to fd 1 ...
I0229 01:24:56.819738  333267 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:24:56.819757  333267 out.go:304] Setting ErrFile to fd 2...
I0229 01:24:56.819766  333267 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:24:56.820115  333267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
I0229 01:24:56.821000  333267 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 01:24:56.821157  333267 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 01:24:56.821785  333267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 01:24:56.821913  333267 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:24:56.840035  333267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45147
I0229 01:24:56.840569  333267 main.go:141] libmachine: () Calling .GetVersion
I0229 01:24:56.841160  333267 main.go:141] libmachine: Using API Version  1
I0229 01:24:56.841185  333267 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:24:56.841591  333267 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:24:56.841834  333267 main.go:141] libmachine: (functional-921098) Calling .GetState
I0229 01:24:56.843875  333267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 01:24:56.843926  333267 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:24:56.859812  333267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41773
I0229 01:24:56.860324  333267 main.go:141] libmachine: () Calling .GetVersion
I0229 01:24:56.861052  333267 main.go:141] libmachine: Using API Version  1
I0229 01:24:56.861080  333267 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:24:56.861616  333267 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:24:56.861786  333267 main.go:141] libmachine: (functional-921098) Calling .DriverName
I0229 01:24:56.862023  333267 ssh_runner.go:195] Run: systemctl --version
I0229 01:24:56.862057  333267 main.go:141] libmachine: (functional-921098) Calling .GetSSHHostname
I0229 01:24:56.865327  333267 main.go:141] libmachine: (functional-921098) DBG | domain functional-921098 has defined MAC address 52:54:00:04:90:72 in network mk-functional-921098
I0229 01:24:56.866090  333267 main.go:141] libmachine: (functional-921098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:90:72", ip: ""} in network mk-functional-921098: {Iface:virbr1 ExpiryTime:2024-02-29 02:21:39 +0000 UTC Type:0 Mac:52:54:00:04:90:72 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-921098 Clientid:01:52:54:00:04:90:72}
I0229 01:24:56.866125  333267 main.go:141] libmachine: (functional-921098) DBG | domain functional-921098 has defined IP address 192.168.39.54 and MAC address 52:54:00:04:90:72 in network mk-functional-921098
I0229 01:24:56.866339  333267 main.go:141] libmachine: (functional-921098) Calling .GetSSHPort
I0229 01:24:56.866544  333267 main.go:141] libmachine: (functional-921098) Calling .GetSSHKeyPath
I0229 01:24:56.866854  333267 main.go:141] libmachine: (functional-921098) Calling .GetSSHUsername
I0229 01:24:56.867020  333267 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/functional-921098/id_rsa Username:docker}
I0229 01:24:56.980116  333267 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:24:57.094267  333267 main.go:141] libmachine: Making call to close driver server
I0229 01:24:57.094291  333267 main.go:141] libmachine: (functional-921098) Calling .Close
I0229 01:24:57.094604  333267 main.go:141] libmachine: (functional-921098) DBG | Closing plugin on server side
I0229 01:24:57.094623  333267 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:24:57.094637  333267 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:24:57.094658  333267 main.go:141] libmachine: Making call to close driver server
I0229 01:24:57.094666  333267 main.go:141] libmachine: (functional-921098) Calling .Close
I0229 01:24:57.094949  333267 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:24:57.094962  333267 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-921098 image ls --format json --alsologtostderr:
[{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"83f6c
c407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-921098"],"size":"34114467"},{"id":"da86e6ba6ca197bf6bc5e9d900fe
bd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"db26a95fd106f1cb59a36f15c613ce94d8b954ee3a6dafe88c3d510cc03ebe0e","repoDigests":["localhost/minikube-local-cache-test@sha256:5e862187f0f854f4e6e462f1bf8eef7bbeb5bd8e62e7ed69b59028ae7effffd1"],"re
poTags":["localhost/minikube-local-cache-test:functional-921098"],"size":"3345"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","reg
istry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256
:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":["docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71","docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865895"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","do
cker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-921098 image ls --format json --alsologtostderr:
I0229 01:24:56.545146  333214 out.go:291] Setting OutFile to fd 1 ...
I0229 01:24:56.545425  333214 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:24:56.545435  333214 out.go:304] Setting ErrFile to fd 2...
I0229 01:24:56.545439  333214 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:24:56.545653  333214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
I0229 01:24:56.546340  333214 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 01:24:56.546455  333214 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 01:24:56.546899  333214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 01:24:56.546951  333214 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:24:56.563732  333214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
I0229 01:24:56.564317  333214 main.go:141] libmachine: () Calling .GetVersion
I0229 01:24:56.564961  333214 main.go:141] libmachine: Using API Version  1
I0229 01:24:56.564981  333214 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:24:56.565431  333214 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:24:56.565623  333214 main.go:141] libmachine: (functional-921098) Calling .GetState
I0229 01:24:56.567661  333214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 01:24:56.567701  333214 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:24:56.583359  333214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
I0229 01:24:56.583842  333214 main.go:141] libmachine: () Calling .GetVersion
I0229 01:24:56.584321  333214 main.go:141] libmachine: Using API Version  1
I0229 01:24:56.584344  333214 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:24:56.584688  333214 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:24:56.584892  333214 main.go:141] libmachine: (functional-921098) Calling .DriverName
I0229 01:24:56.585107  333214 ssh_runner.go:195] Run: systemctl --version
I0229 01:24:56.585135  333214 main.go:141] libmachine: (functional-921098) Calling .GetSSHHostname
I0229 01:24:56.587951  333214 main.go:141] libmachine: (functional-921098) DBG | domain functional-921098 has defined MAC address 52:54:00:04:90:72 in network mk-functional-921098
I0229 01:24:56.588491  333214 main.go:141] libmachine: (functional-921098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:90:72", ip: ""} in network mk-functional-921098: {Iface:virbr1 ExpiryTime:2024-02-29 02:21:39 +0000 UTC Type:0 Mac:52:54:00:04:90:72 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-921098 Clientid:01:52:54:00:04:90:72}
I0229 01:24:56.588525  333214 main.go:141] libmachine: (functional-921098) DBG | domain functional-921098 has defined IP address 192.168.39.54 and MAC address 52:54:00:04:90:72 in network mk-functional-921098
I0229 01:24:56.588620  333214 main.go:141] libmachine: (functional-921098) Calling .GetSSHPort
I0229 01:24:56.588814  333214 main.go:141] libmachine: (functional-921098) Calling .GetSSHKeyPath
I0229 01:24:56.589015  333214 main.go:141] libmachine: (functional-921098) Calling .GetSSHUsername
I0229 01:24:56.589160  333214 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/functional-921098/id_rsa Username:docker}
I0229 01:24:56.688651  333214 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:24:56.747309  333214 main.go:141] libmachine: Making call to close driver server
I0229 01:24:56.747329  333214 main.go:141] libmachine: (functional-921098) Calling .Close
I0229 01:24:56.747732  333214 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:24:56.747729  333214 main.go:141] libmachine: (functional-921098) DBG | Closing plugin on server side
I0229 01:24:56.747753  333214 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:24:56.747763  333214 main.go:141] libmachine: Making call to close driver server
I0229 01:24:56.747772  333214 main.go:141] libmachine: (functional-921098) Calling .Close
I0229 01:24:56.748011  333214 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:24:56.748027  333214 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-921098 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: db26a95fd106f1cb59a36f15c613ce94d8b954ee3a6dafe88c3d510cc03ebe0e
repoDigests:
- localhost/minikube-local-cache-test@sha256:5e862187f0f854f4e6e462f1bf8eef7bbeb5bd8e62e7ed69b59028ae7effffd1
repoTags:
- localhost/minikube-local-cache-test:functional-921098
size: "3345"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-921098
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests:
- docker.io/library/nginx@sha256:05aa73005987caaed48ea8213696b0df761ccd600d2c53fc0a1a97a180301d71
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
repoTags:
- docker.io/library/nginx:latest
size: "190865895"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-921098 image ls --format yaml --alsologtostderr:
I0229 01:24:56.263778  333169 out.go:291] Setting OutFile to fd 1 ...
I0229 01:24:56.263952  333169 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:24:56.263967  333169 out.go:304] Setting ErrFile to fd 2...
I0229 01:24:56.263974  333169 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:24:56.264317  333169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
I0229 01:24:56.265978  333169 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 01:24:56.266175  333169 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 01:24:56.267486  333169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 01:24:56.267572  333169 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:24:56.284750  333169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38945
I0229 01:24:56.285344  333169 main.go:141] libmachine: () Calling .GetVersion
I0229 01:24:56.286033  333169 main.go:141] libmachine: Using API Version  1
I0229 01:24:56.286058  333169 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:24:56.286444  333169 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:24:56.286667  333169 main.go:141] libmachine: (functional-921098) Calling .GetState
I0229 01:24:56.288728  333169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 01:24:56.288777  333169 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:24:56.309108  333169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38797
I0229 01:24:56.309609  333169 main.go:141] libmachine: () Calling .GetVersion
I0229 01:24:56.310170  333169 main.go:141] libmachine: Using API Version  1
I0229 01:24:56.310194  333169 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:24:56.310534  333169 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:24:56.310817  333169 main.go:141] libmachine: (functional-921098) Calling .DriverName
I0229 01:24:56.311091  333169 ssh_runner.go:195] Run: systemctl --version
I0229 01:24:56.311124  333169 main.go:141] libmachine: (functional-921098) Calling .GetSSHHostname
I0229 01:24:56.314009  333169 main.go:141] libmachine: (functional-921098) DBG | domain functional-921098 has defined MAC address 52:54:00:04:90:72 in network mk-functional-921098
I0229 01:24:56.314456  333169 main.go:141] libmachine: (functional-921098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:90:72", ip: ""} in network mk-functional-921098: {Iface:virbr1 ExpiryTime:2024-02-29 02:21:39 +0000 UTC Type:0 Mac:52:54:00:04:90:72 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-921098 Clientid:01:52:54:00:04:90:72}
I0229 01:24:56.314487  333169 main.go:141] libmachine: (functional-921098) DBG | domain functional-921098 has defined IP address 192.168.39.54 and MAC address 52:54:00:04:90:72 in network mk-functional-921098
I0229 01:24:56.314637  333169 main.go:141] libmachine: (functional-921098) Calling .GetSSHPort
I0229 01:24:56.314821  333169 main.go:141] libmachine: (functional-921098) Calling .GetSSHKeyPath
I0229 01:24:56.314984  333169 main.go:141] libmachine: (functional-921098) Calling .GetSSHUsername
I0229 01:24:56.315136  333169 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/functional-921098/id_rsa Username:docker}
I0229 01:24:56.412050  333169 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:24:56.534843  333169 main.go:141] libmachine: Making call to close driver server
I0229 01:24:56.534856  333169 main.go:141] libmachine: (functional-921098) Calling .Close
I0229 01:24:56.535196  333169 main.go:141] libmachine: (functional-921098) DBG | Closing plugin on server side
I0229 01:24:56.535244  333169 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:24:56.535264  333169 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:24:56.535274  333169 main.go:141] libmachine: Making call to close driver server
I0229 01:24:56.535287  333169 main.go:141] libmachine: (functional-921098) Calling .Close
I0229 01:24:56.535549  333169 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:24:56.535570  333169 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-921098 ssh pgrep buildkitd: exit status 1 (231.923141ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image build -t localhost/my-image:functional-921098 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 image build -t localhost/my-image:functional-921098 testdata/build --alsologtostderr: (3.32395808s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-921098 image build -t localhost/my-image:functional-921098 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7152221a057
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-921098
--> fc7c3d45d8f
Successfully tagged localhost/my-image:functional-921098
fc7c3d45d8fdf77ade48ec0cb87e44a222785c8ed36c333a9d8d0558e07925db
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-921098 image build -t localhost/my-image:functional-921098 testdata/build --alsologtostderr:
I0229 01:24:56.843622  333278 out.go:291] Setting OutFile to fd 1 ...
I0229 01:24:56.843757  333278 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:24:56.843769  333278 out.go:304] Setting ErrFile to fd 2...
I0229 01:24:56.843776  333278 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:24:56.844396  333278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
I0229 01:24:56.846160  333278 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 01:24:56.846747  333278 config.go:182] Loaded profile config "functional-921098": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0229 01:24:56.847245  333278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 01:24:56.847284  333278 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:24:56.862944  333278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
I0229 01:24:56.863451  333278 main.go:141] libmachine: () Calling .GetVersion
I0229 01:24:56.864208  333278 main.go:141] libmachine: Using API Version  1
I0229 01:24:56.864230  333278 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:24:56.864700  333278 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:24:56.864889  333278 main.go:141] libmachine: (functional-921098) Calling .GetState
I0229 01:24:56.867535  333278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0229 01:24:56.867574  333278 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:24:56.884674  333278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39949
I0229 01:24:56.885135  333278 main.go:141] libmachine: () Calling .GetVersion
I0229 01:24:56.885508  333278 main.go:141] libmachine: Using API Version  1
I0229 01:24:56.885525  333278 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:24:56.885987  333278 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:24:56.886164  333278 main.go:141] libmachine: (functional-921098) Calling .DriverName
I0229 01:24:56.886415  333278 ssh_runner.go:195] Run: systemctl --version
I0229 01:24:56.886446  333278 main.go:141] libmachine: (functional-921098) Calling .GetSSHHostname
I0229 01:24:56.891015  333278 main.go:141] libmachine: (functional-921098) DBG | domain functional-921098 has defined MAC address 52:54:00:04:90:72 in network mk-functional-921098
I0229 01:24:56.891437  333278 main.go:141] libmachine: (functional-921098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:90:72", ip: ""} in network mk-functional-921098: {Iface:virbr1 ExpiryTime:2024-02-29 02:21:39 +0000 UTC Type:0 Mac:52:54:00:04:90:72 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-921098 Clientid:01:52:54:00:04:90:72}
I0229 01:24:56.891458  333278 main.go:141] libmachine: (functional-921098) DBG | domain functional-921098 has defined IP address 192.168.39.54 and MAC address 52:54:00:04:90:72 in network mk-functional-921098
I0229 01:24:56.891608  333278 main.go:141] libmachine: (functional-921098) Calling .GetSSHPort
I0229 01:24:56.891734  333278 main.go:141] libmachine: (functional-921098) Calling .GetSSHKeyPath
I0229 01:24:56.891852  333278 main.go:141] libmachine: (functional-921098) Calling .GetSSHUsername
I0229 01:24:56.891959  333278 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/functional-921098/id_rsa Username:docker}
I0229 01:24:57.038347  333278 build_images.go:151] Building image from path: /tmp/build.980313924.tar
I0229 01:24:57.038428  333278 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 01:24:57.083532  333278 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.980313924.tar
I0229 01:24:57.100105  333278 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.980313924.tar: stat -c "%s %y" /var/lib/minikube/build/build.980313924.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.980313924.tar': No such file or directory
I0229 01:24:57.100134  333278 ssh_runner.go:362] scp /tmp/build.980313924.tar --> /var/lib/minikube/build/build.980313924.tar (3072 bytes)
I0229 01:24:57.164990  333278 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.980313924
I0229 01:24:57.176535  333278 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.980313924 -xf /var/lib/minikube/build/build.980313924.tar
I0229 01:24:57.187672  333278 crio.go:297] Building image: /var/lib/minikube/build/build.980313924
I0229 01:24:57.187758  333278 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-921098 /var/lib/minikube/build/build.980313924 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0229 01:25:00.067890  333278 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-921098 /var/lib/minikube/build/build.980313924 --cgroup-manager=cgroupfs: (2.880098459s)
I0229 01:25:00.067963  333278 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.980313924
I0229 01:25:00.080949  333278 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.980313924.tar
I0229 01:25:00.094458  333278 build_images.go:207] Built localhost/my-image:functional-921098 from /tmp/build.980313924.tar
I0229 01:25:00.094504  333278 build_images.go:123] succeeded building to: functional-921098
I0229 01:25:00.094527  333278 build_images.go:124] failed building to: 
I0229 01:25:00.094561  333278 main.go:141] libmachine: Making call to close driver server
I0229 01:25:00.094576  333278 main.go:141] libmachine: (functional-921098) Calling .Close
I0229 01:25:00.094920  333278 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:25:00.094929  333278 main.go:141] libmachine: (functional-921098) DBG | Closing plugin on server side
I0229 01:25:00.094943  333278 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 01:25:00.094953  333278 main.go:141] libmachine: Making call to close driver server
I0229 01:25:00.094965  333278 main.go:141] libmachine: (functional-921098) Calling .Close
I0229 01:25:00.095201  333278 main.go:141] libmachine: (functional-921098) DBG | Closing plugin on server side
I0229 01:25:00.095232  333278 main.go:141] libmachine: Successfully made call to close driver server
I0229 01:25:00.095244  333278 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)
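
For reference, the sequence logged above (scp the build-context tar to /var/lib/minikube/build, then podman build inside the VM) is what the image build subcommand drives end to end. A minimal sketch of reproducing it by hand, assuming the functional-921098 profile is still running and ./ctx is a hypothetical directory containing a Dockerfile:

    # build the context inside the cluster's container runtime
    out/minikube-linux-amd64 -p functional-921098 image build -t localhost/my-image:functional-921098 ./ctx
    # confirm the image landed in the CRI-O image store
    out/minikube-linux-amd64 -p functional-921098 image ls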

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.082897054s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-921098
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.34s)
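
The three update-context cases above exercise the same user-facing command against differing kubeconfig states. A minimal sketch, assuming the profile's VM IP may have drifted from what the kubeconfig records:

    # rewrite the kubeconfig entry to the profile's current IP
    out/minikube-linux-amd64 -p functional-921098 update-context
    # inspect the server URL the entry now points at
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-921098")].cluster.server}'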

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image load --daemon gcr.io/google-containers/addon-resizer:functional-921098 --alsologtostderr
2024/02/29 01:24:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 image load --daemon gcr.io/google-containers/addon-resizer:functional-921098 --alsologtostderr: (3.974635258s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image load --daemon gcr.io/google-containers/addon-resizer:functional-921098 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 image load --daemon gcr.io/google-containers/addon-resizer:functional-921098 --alsologtostderr: (4.506505818s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.074469826s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-921098
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image load --daemon gcr.io/google-containers/addon-resizer:functional-921098 --alsologtostderr
E0229 01:24:37.825371  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:24:37.831972  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:24:37.842310  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:24:37.863512  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:24:37.903881  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:24:37.984260  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:24:38.144597  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:24:38.465219  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:24:39.106271  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:24:40.386769  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 image load --daemon gcr.io/google-containers/addon-resizer:functional-921098 --alsologtostderr: (12.909265331s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (15.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image save gcr.io/google-containers/addon-resizer:functional-921098 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 image save gcr.io/google-containers/addon-resizer:functional-921098 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.406389415s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image rm gcr.io/google-containers/addon-resizer:functional-921098 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.932128586s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-921098
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-921098 image save --daemon gcr.io/google-containers/addon-resizer:functional-921098 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-921098 image save --daemon gcr.io/google-containers/addon-resizer:functional-921098 --alsologtostderr: (1.276566205s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-921098
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.31s)
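
Taken together, the ImageCommands tests above walk one image through a full round trip between the host docker daemon and the cluster's CRI-O store. A condensed sketch of that sequence, assuming the functional-921098 profile, a local docker daemon, and a writable ./addon-resizer-save.tar path:

    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-921098
    # host daemon -> cluster
    out/minikube-linux-amd64 -p functional-921098 image load --daemon gcr.io/google-containers/addon-resizer:functional-921098
    # cluster -> tarball, drop it, reload from the tarball
    out/minikube-linux-amd64 -p functional-921098 image save gcr.io/google-containers/addon-resizer:functional-921098 ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-921098 image rm gcr.io/google-containers/addon-resizer:functional-921098
    out/minikube-linux-amd64 -p functional-921098 image load ./addon-resizer-save.tar
    # cluster -> host daemon
    out/minikube-linux-amd64 -p functional-921098 image save --daemon gcr.io/google-containers/addon-resizer:functional-921098
    docker image inspect gcr.io/google-containers/addon-resizer:functional-921098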

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-921098
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-921098
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-921098
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestJSONOutput/start/Command (98.73s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-129623 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0229 01:34:09.040602  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-129623 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.726656199s)
--- PASS: TestJSONOutput/start/Command (98.73s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-129623 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-129623 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-129623 --output=json --user=testUser
E0229 01:34:36.725229  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:34:37.825502  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-129623 --output=json --user=testUser: (7.112305871s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-660133 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-660133 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.942619ms)
-- stdout --
	{"specversion":"1.0","id":"4e351e38-ab80-4d50-b6b5-a79ef37de292","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-660133] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e002a2c1-fe64-4429-bc60-f068574c72c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18063"}}
	{"specversion":"1.0","id":"33f111ff-cad3-42a3-b297-89f32ba19fa4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bbabe3b7-b3ea-4803-aef7-00c837346516","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig"}}
	{"specversion":"1.0","id":"174318ad-7c9e-447e-b7f1-bdf0510487e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube"}}
	{"specversion":"1.0","id":"7392a6e7-4b29-4af2-b83b-6338504fe136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bde980d0-3821-4cb9-abcc-c06ea6c991ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d3246422-8587-47a5-af3e-f4684d9e8ea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-660133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-660133
--- PASS: TestErrorJSONOutput (0.21s)
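
Each line of the JSON output above is a CloudEvents envelope with the payload under .data, so the stream can be post-processed line by line. A sketch of pulling out just the error event, assuming jq is available (jq is not part of the test itself):

    out/minikube-linux-amd64 start -p json-output-error-660133 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # expected, per the payload above: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64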

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (94.32s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-098045 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-098045 --driver=kvm2  --container-runtime=crio: (44.891705103s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-100585 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-100585 --driver=kvm2  --container-runtime=crio: (46.773233843s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-098045
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-100585
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-100585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-100585
helpers_test.go:175: Cleaning up "first-098045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-098045
--- PASS: TestMinikubeProfile (94.32s)
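
A note on the profile list -ojson calls above: the JSON groups profiles into valid and invalid arrays. A sketch of extracting just the valid profile names, assuming jq and assuming the array/field names (valid, Name) match this minikube version:

    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'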

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-083975 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-083975 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.858053476s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.86s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-083975 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-083975 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.98s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-102671 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-102671 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.977274404s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.98s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102671 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102671 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-083975 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102671 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102671 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-102671
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-102671: (1.246529744s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-107035 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0229 01:39:09.040169  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-107035 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.628143159s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.04s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-107035 -- rollout status deployment/busybox: (3.793087946s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- exec busybox-5b5d89c9d6-dpkx5 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- exec busybox-5b5d89c9d6-gz4cd -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- exec busybox-5b5d89c9d6-dpkx5 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- exec busybox-5b5d89c9d6-gz4cd -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- exec busybox-5b5d89c9d6-dpkx5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- exec busybox-5b5d89c9d6-gz4cd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.60s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- exec busybox-5b5d89c9d6-dpkx5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- exec busybox-5b5d89c9d6-dpkx5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- exec busybox-5b5d89c9d6-gz4cd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107035 -- exec busybox-5b5d89c9d6-gz4cd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (41.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-107035 -v 3 --alsologtostderr
E0229 01:39:37.825156  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-107035 -v 3 --alsologtostderr: (41.203969491s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.79s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-107035 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp testdata/cp-test.txt multinode-107035:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4129256065/001/cp-test_multinode-107035.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035:/home/docker/cp-test.txt multinode-107035-m02:/home/docker/cp-test_multinode-107035_multinode-107035-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m02 "sudo cat /home/docker/cp-test_multinode-107035_multinode-107035-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035:/home/docker/cp-test.txt multinode-107035-m03:/home/docker/cp-test_multinode-107035_multinode-107035-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m03 "sudo cat /home/docker/cp-test_multinode-107035_multinode-107035-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp testdata/cp-test.txt multinode-107035-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4129256065/001/cp-test_multinode-107035-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035-m02:/home/docker/cp-test.txt multinode-107035:/home/docker/cp-test_multinode-107035-m02_multinode-107035.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035 "sudo cat /home/docker/cp-test_multinode-107035-m02_multinode-107035.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035-m02:/home/docker/cp-test.txt multinode-107035-m03:/home/docker/cp-test_multinode-107035-m02_multinode-107035-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m03 "sudo cat /home/docker/cp-test_multinode-107035-m02_multinode-107035-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp testdata/cp-test.txt multinode-107035-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4129256065/001/cp-test_multinode-107035-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035-m03:/home/docker/cp-test.txt multinode-107035:/home/docker/cp-test_multinode-107035-m03_multinode-107035.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035 "sudo cat /home/docker/cp-test_multinode-107035-m03_multinode-107035.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035-m03:/home/docker/cp-test.txt multinode-107035-m02:/home/docker/cp-test_multinode-107035-m03_multinode-107035-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m02 "sudo cat /home/docker/cp-test_multinode-107035-m03_multinode-107035-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.56s)
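
The CopyFile matrix above exercises three cp directions (host to node, node to host, node to node) across every node pair. A condensed sketch, assuming the multinode-107035 profile with worker node m02; the /tmp destination and _copy.txt name are illustrative:

    # host -> node
    out/minikube-linux-amd64 -p multinode-107035 cp testdata/cp-test.txt multinode-107035:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    out/minikube-linux-amd64 -p multinode-107035 cp multinode-107035:/home/docker/cp-test.txt multinode-107035-m02:/home/docker/cp-test_copy.txt
    # verify on the receiving node
    out/minikube-linux-amd64 -p multinode-107035 ssh -n multinode-107035-m02 "sudo cat /home/docker/cp-test_copy.txt"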

                                                
                                    
TestMultiNode/serial/StopNode (2.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-107035 node stop m03: (1.418117893s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-107035 status: exit status 7 (459.827699ms)
-- stdout --
	multinode-107035
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-107035-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-107035-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-107035 status --alsologtostderr: exit status 7 (436.604299ms)
-- stdout --
	multinode-107035
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-107035-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-107035-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0229 01:40:25.136161  340304 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:40:25.136294  340304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:40:25.136304  340304 out.go:304] Setting ErrFile to fd 2...
	I0229 01:40:25.136311  340304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:40:25.136530  340304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 01:40:25.136731  340304 out.go:298] Setting JSON to false
	I0229 01:40:25.136769  340304 mustload.go:65] Loading cluster: multinode-107035
	I0229 01:40:25.136827  340304 notify.go:220] Checking for updates...
	I0229 01:40:25.137174  340304 config.go:182] Loaded profile config "multinode-107035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 01:40:25.137193  340304 status.go:255] checking status of multinode-107035 ...
	I0229 01:40:25.137605  340304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:40:25.137691  340304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:40:25.153404  340304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I0229 01:40:25.153835  340304 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:40:25.154350  340304 main.go:141] libmachine: Using API Version  1
	I0229 01:40:25.154374  340304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:40:25.154766  340304 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:40:25.154955  340304 main.go:141] libmachine: (multinode-107035) Calling .GetState
	I0229 01:40:25.156460  340304 status.go:330] multinode-107035 host status = "Running" (err=<nil>)
	I0229 01:40:25.156484  340304 host.go:66] Checking if "multinode-107035" exists ...
	I0229 01:40:25.156765  340304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:40:25.156802  340304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:40:25.171730  340304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0229 01:40:25.172109  340304 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:40:25.172538  340304 main.go:141] libmachine: Using API Version  1
	I0229 01:40:25.172559  340304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:40:25.172901  340304 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:40:25.173064  340304 main.go:141] libmachine: (multinode-107035) Calling .GetIP
	I0229 01:40:25.175544  340304 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:40:25.175966  340304 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:37:55 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:40:25.176003  340304 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:40:25.176119  340304 host.go:66] Checking if "multinode-107035" exists ...
	I0229 01:40:25.176435  340304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:40:25.176476  340304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:40:25.191521  340304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0229 01:40:25.191924  340304 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:40:25.192387  340304 main.go:141] libmachine: Using API Version  1
	I0229 01:40:25.192418  340304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:40:25.192720  340304 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:40:25.192897  340304 main.go:141] libmachine: (multinode-107035) Calling .DriverName
	I0229 01:40:25.193064  340304 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 01:40:25.193082  340304 main.go:141] libmachine: (multinode-107035) Calling .GetSSHHostname
	I0229 01:40:25.195614  340304 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:40:25.195985  340304 main.go:141] libmachine: (multinode-107035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:8b:7f", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:37:55 +0000 UTC Type:0 Mac:52:54:00:dd:8b:7f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-107035 Clientid:01:52:54:00:dd:8b:7f}
	I0229 01:40:25.196009  340304 main.go:141] libmachine: (multinode-107035) DBG | domain multinode-107035 has defined IP address 192.168.39.183 and MAC address 52:54:00:dd:8b:7f in network mk-multinode-107035
	I0229 01:40:25.196121  340304 main.go:141] libmachine: (multinode-107035) Calling .GetSSHPort
	I0229 01:40:25.196276  340304 main.go:141] libmachine: (multinode-107035) Calling .GetSSHKeyPath
	I0229 01:40:25.196417  340304 main.go:141] libmachine: (multinode-107035) Calling .GetSSHUsername
	I0229 01:40:25.196553  340304 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035/id_rsa Username:docker}
	I0229 01:40:25.275392  340304 ssh_runner.go:195] Run: systemctl --version
	I0229 01:40:25.282271  340304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:40:25.296998  340304 kubeconfig.go:92] found "multinode-107035" server: "https://192.168.39.183:8443"
	I0229 01:40:25.297029  340304 api_server.go:166] Checking apiserver status ...
	I0229 01:40:25.297063  340304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:40:25.315130  340304 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1076/cgroup
	W0229 01:40:25.325235  340304 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1076/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:40:25.325290  340304 ssh_runner.go:195] Run: ls
	I0229 01:40:25.330257  340304 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0229 01:40:25.334914  340304 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0229 01:40:25.334938  340304 status.go:421] multinode-107035 apiserver status = Running (err=<nil>)
	I0229 01:40:25.334948  340304 status.go:257] multinode-107035 status: &{Name:multinode-107035 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 01:40:25.334967  340304 status.go:255] checking status of multinode-107035-m02 ...
	I0229 01:40:25.335250  340304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:40:25.335296  340304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:40:25.351343  340304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37463
	I0229 01:40:25.351798  340304 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:40:25.352331  340304 main.go:141] libmachine: Using API Version  1
	I0229 01:40:25.352355  340304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:40:25.352764  340304 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:40:25.352973  340304 main.go:141] libmachine: (multinode-107035-m02) Calling .GetState
	I0229 01:40:25.354719  340304 status.go:330] multinode-107035-m02 host status = "Running" (err=<nil>)
	I0229 01:40:25.354737  340304 host.go:66] Checking if "multinode-107035-m02" exists ...
	I0229 01:40:25.355078  340304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:40:25.355142  340304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:40:25.370156  340304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41949
	I0229 01:40:25.370597  340304 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:40:25.371129  340304 main.go:141] libmachine: Using API Version  1
	I0229 01:40:25.371153  340304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:40:25.371494  340304 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:40:25.371693  340304 main.go:141] libmachine: (multinode-107035-m02) Calling .GetIP
	I0229 01:40:25.374205  340304 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:40:25.374641  340304 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:40:25.374672  340304 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:40:25.374804  340304 host.go:66] Checking if "multinode-107035-m02" exists ...
	I0229 01:40:25.375108  340304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:40:25.375169  340304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:40:25.391629  340304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45381
	I0229 01:40:25.392054  340304 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:40:25.392495  340304 main.go:141] libmachine: Using API Version  1
	I0229 01:40:25.392513  340304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:40:25.392873  340304 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:40:25.393055  340304 main.go:141] libmachine: (multinode-107035-m02) Calling .DriverName
	I0229 01:40:25.393245  340304 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 01:40:25.393268  340304 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHHostname
	I0229 01:40:25.395787  340304 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:40:25.396322  340304 main.go:141] libmachine: (multinode-107035-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:33:55", ip: ""} in network mk-multinode-107035: {Iface:virbr1 ExpiryTime:2024-02-29 02:39:00 +0000 UTC Type:0 Mac:52:54:00:f8:33:55 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:multinode-107035-m02 Clientid:01:52:54:00:f8:33:55}
	I0229 01:40:25.396361  340304 main.go:141] libmachine: (multinode-107035-m02) DBG | domain multinode-107035-m02 has defined IP address 192.168.39.26 and MAC address 52:54:00:f8:33:55 in network mk-multinode-107035
	I0229 01:40:25.396498  340304 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHPort
	I0229 01:40:25.396682  340304 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHKeyPath
	I0229 01:40:25.396937  340304 main.go:141] libmachine: (multinode-107035-m02) Calling .GetSSHUsername
	I0229 01:40:25.397079  340304 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-316644/.minikube/machines/multinode-107035-m02/id_rsa Username:docker}
	I0229 01:40:25.479040  340304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:40:25.494966  340304 status.go:257] multinode-107035-m02 status: &{Name:multinode-107035-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 01:40:25.495015  340304 status.go:255] checking status of multinode-107035-m03 ...
	I0229 01:40:25.495412  340304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0229 01:40:25.495457  340304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:40:25.511770  340304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0229 01:40:25.512269  340304 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:40:25.512761  340304 main.go:141] libmachine: Using API Version  1
	I0229 01:40:25.512783  340304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:40:25.513099  340304 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:40:25.513276  340304 main.go:141] libmachine: (multinode-107035-m03) Calling .GetState
	I0229 01:40:25.514801  340304 status.go:330] multinode-107035-m03 host status = "Stopped" (err=<nil>)
	I0229 01:40:25.514821  340304 status.go:343] host is not running, skipping remaining checks
	I0229 01:40:25.514827  340304 status.go:257] multinode-107035-m03 status: &{Name:multinode-107035-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)

TestMultiNode/serial/StartAfterStop (28.42s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-107035 node start m03 --alsologtostderr: (27.763615963s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.42s)

TestMultiNode/serial/DeleteNode (1.59s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-107035 node delete m03: (1.021145375s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.59s)

TestMultiNode/serial/RestartMultiNode (447.38s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-107035 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0229 01:57:40.880576  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
E0229 01:59:09.040630  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 01:59:37.824947  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-107035 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.815841546s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107035 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (447.38s)

TestMultiNode/serial/ValidateNameConflict (49.76s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-107035
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-107035-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-107035-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.748012ms)

-- stdout --
	* [multinode-107035-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-107035-m02' is duplicated with machine name 'multinode-107035-m02' in profile 'multinode-107035'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-107035-m03 --driver=kvm2  --container-runtime=crio
E0229 02:02:12.088140  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-107035-m03 --driver=kvm2  --container-runtime=crio: (48.395480534s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-107035
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-107035: exit status 80 (231.961355ms)

-- stdout --
	* Adding node m03 to cluster multinode-107035
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-107035-m03 already exists in multinode-107035-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-107035-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.76s)
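
The rule exercised above: a new profile name may not collide with an existing profile or with a machine name inside a multi-node profile. A minimal reproduction using the same commands this test ran (exit codes as logged):

	# Rejected with MK_USAGE (exit 14): "multinode-107035-m02" is already a machine in profile "multinode-107035"
	out/minikube-linux-amd64 start -p multinode-107035-m02 --driver=kvm2 --container-runtime=crio
	# A free name starts fine, but a subsequent "node add" that would reuse it fails with GUEST_NODE_ADD (exit 80)
	out/minikube-linux-amd64 start -p multinode-107035-m03 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 node add -p multinode-107035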

TestScheduledStopUnix (116.74s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-402918 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-402918 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.930906826s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-402918 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-402918 -n scheduled-stop-402918
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-402918 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-402918 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-402918 -n scheduled-stop-402918
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-402918
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-402918 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0229 02:09:09.039785  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-402918
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-402918: exit status 7 (76.754363ms)

-- stdout --
	scheduled-stop-402918
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-402918 -n scheduled-stop-402918
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-402918 -n scheduled-stop-402918: exit status 7 (75.856043ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-402918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-402918
--- PASS: TestScheduledStopUnix (116.74s)
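
For reference, the scheduled-stop flow validated above, condensed to the commands the test ran (schedules are illustrative):

	# Schedule a stop; issuing a new --schedule replaces the pending one
	out/minikube-linux-amd64 stop -p scheduled-stop-402918 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-402918 --schedule 15s
	# A pending stop can be cancelled before it fires
	out/minikube-linux-amd64 stop -p scheduled-stop-402918 --cancel-scheduled
	# After the stop fires, status reports Stopped and exits 7
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-402918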

TestRunningBinaryUpgrade (215.49s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4137730014 start -p running-upgrade-546307 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4137730014 start -p running-upgrade-546307 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m55.985204758s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-546307 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-546307 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m35.287209109s)
helpers_test.go:175: Cleaning up "running-upgrade-546307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-546307
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-546307: (1.255144888s)
--- PASS: TestRunningBinaryUpgrade (215.49s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-424173 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-424173 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (94.376961ms)

-- stdout --
	* [NoKubernetes-424173] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
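
The guard verified above, reduced to the two relevant commands (the second is the fix suggested by minikube's own error output):

	# Rejected with MK_USAGE (exit 14): --kubernetes-version cannot be combined with --no-kubernetes
	out/minikube-linux-amd64 start -p NoKubernetes-424173 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# If a version is pinned in the global config, clear it first
	minikube config unset kubernetes-version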

TestNoKubernetes/serial/StartWithK8s (98.4s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-424173 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-424173 --driver=kvm2  --container-runtime=crio: (1m38.11938563s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-424173 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.40s)

TestNetworkPlugins/group/false (3.34s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-117441 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-117441 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (111.620137ms)

-- stdout --
	* [false-117441] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0229 02:09:35.018504  348095 out.go:291] Setting OutFile to fd 1 ...
	I0229 02:09:35.018655  348095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:09:35.018666  348095 out.go:304] Setting ErrFile to fd 2...
	I0229 02:09:35.018673  348095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 02:09:35.018895  348095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-316644/.minikube/bin
	I0229 02:09:35.019482  348095 out.go:298] Setting JSON to false
	I0229 02:09:35.020510  348095 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6718,"bootTime":1709165857,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 02:09:35.020579  348095 start.go:139] virtualization: kvm guest
	I0229 02:09:35.022584  348095 out.go:177] * [false-117441] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 02:09:35.023934  348095 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 02:09:35.023995  348095 notify.go:220] Checking for updates...
	I0229 02:09:35.025196  348095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 02:09:35.026508  348095 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-316644/kubeconfig
	I0229 02:09:35.027585  348095 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-316644/.minikube
	I0229 02:09:35.028869  348095 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 02:09:35.030075  348095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 02:09:35.031879  348095 config.go:182] Loaded profile config "NoKubernetes-424173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:09:35.032038  348095 config.go:182] Loaded profile config "force-systemd-env-540640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:09:35.032174  348095 config.go:182] Loaded profile config "offline-crio-395379": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0229 02:09:35.032282  348095 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 02:09:35.068583  348095 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 02:09:35.069687  348095 start.go:299] selected driver: kvm2
	I0229 02:09:35.069700  348095 start.go:903] validating driver "kvm2" against <nil>
	I0229 02:09:35.069712  348095 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 02:09:35.071722  348095 out.go:177] 
	W0229 02:09:35.072819  348095 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0229 02:09:35.073887  348095 out.go:177] 

** /stderr **
E0229 02:09:37.825066  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
net_test.go:88: 
----------------------- debugLogs start: false-117441 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-117441

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-117441

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-117441

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-117441

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-117441

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-117441

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-117441

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-117441

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-117441

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-117441

>>> host: /etc/nsswitch.conf:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: /etc/hosts:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: /etc/resolv.conf:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-117441

>>> host: crictl pods:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: crictl containers:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> k8s: describe netcat deployment:
error: context "false-117441" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-117441" does not exist

>>> k8s: netcat logs:
error: context "false-117441" does not exist

>>> k8s: describe coredns deployment:
error: context "false-117441" does not exist

>>> k8s: describe coredns pods:
error: context "false-117441" does not exist

>>> k8s: coredns logs:
error: context "false-117441" does not exist

>>> k8s: describe api server pod(s):
error: context "false-117441" does not exist

>>> k8s: api server logs:
error: context "false-117441" does not exist

>>> host: /etc/cni:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: ip a s:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: ip r s:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: iptables-save:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: iptables table nat:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> k8s: describe kube-proxy daemon set:
error: context "false-117441" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-117441" does not exist

>>> k8s: kube-proxy logs:
error: context "false-117441" does not exist

>>> host: kubelet daemon status:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: kubelet daemon config:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> k8s: kubelet logs:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-117441

>>> host: docker daemon status:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: docker daemon config:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: /etc/docker/daemon.json:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: docker system info:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: cri-docker daemon status:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: cri-docker daemon config:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: cri-dockerd version:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: containerd daemon status:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: containerd daemon config:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: /etc/containerd/config.toml:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: containerd config dump:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: crio daemon status:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: crio daemon config:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: /etc/crio:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

>>> host: crio config:
* Profile "false-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-117441"

----------------------- debugLogs end: false-117441 [took: 3.085041105s] --------------------------------
helpers_test.go:175: Cleaning up "false-117441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-117441
--- PASS: TestNetworkPlugins/group/false (3.34s)
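
The constraint verified above, in two commands (the kindnet invocation is the passing run later in this report):

	# Rejected with MK_USAGE (exit 14): the "crio" container runtime requires CNI, so --cni=false is invalid
	out/minikube-linux-amd64 start -p false-117441 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
	# Any concrete CNI works with crio
	out/minikube-linux-amd64 start -p kindnet-117441 --cni=kindnet --driver=kvm2 --container-runtime=crio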

TestNoKubernetes/serial/StartWithStopK8s (45.91s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-424173 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-424173 --no-kubernetes --driver=kvm2  --container-runtime=crio: (44.564213448s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-424173 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-424173 status -o json: exit status 2 (279.420544ms)

-- stdout --
	{"Name":"NoKubernetes-424173","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-424173
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-424173: (1.070364978s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.91s)

TestStoppedBinaryUpgrade/Setup (3s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.00s)

TestStoppedBinaryUpgrade/Upgrade (139.97s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2882929898 start -p stopped-upgrade-745961 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2882929898 start -p stopped-upgrade-745961 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m33.077223992s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2882929898 -p stopped-upgrade-745961 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2882929898 -p stopped-upgrade-745961 stop: (2.123337083s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-745961 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-745961 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.769498803s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (139.97s)
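
The stopped-binary upgrade path, condensed from the three commands above (old-release binary path as downloaded by the test):

	# 1. Create the cluster with the old release binary
	/tmp/minikube-v1.26.0.2882929898 start -p stopped-upgrade-745961 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	# 2. Stop it with that same binary
	/tmp/minikube-v1.26.0.2882929898 -p stopped-upgrade-745961 stop
	# 3. Restart with the binary under test; the stopped profile is upgraded in place
	out/minikube-linux-amd64 start -p stopped-upgrade-745961 --memory=2200 --driver=kvm2 --container-runtime=crio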

TestNoKubernetes/serial/Start (52.05s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-424173 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-424173 --no-kubernetes --driver=kvm2  --container-runtime=crio: (52.04996243s)
--- PASS: TestNoKubernetes/serial/Start (52.05s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-424173 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-424173 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.95516ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

TestNoKubernetes/serial/ProfileList (31.62s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (16.209148607s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.407540267s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.62s)

TestNoKubernetes/serial/Stop (1.57s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-424173
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-424173: (1.565816726s)
--- PASS: TestNoKubernetes/serial/Stop (1.57s)

TestNoKubernetes/serial/StartNoArgs (22.97s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-424173 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-424173 --driver=kvm2  --container-runtime=crio: (22.965062942s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.97s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-745961
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-424173 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-424173 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.937298ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestPause/serial/Start (106.44s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-060637 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-060637 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m46.438590398s)
--- PASS: TestPause/serial/Start (106.44s)

TestNetworkPlugins/group/auto/Start (101.48s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m41.481825398s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.48s)

TestPause/serial/SecondStartNoReconfiguration (36.54s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-060637 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-060637 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.522599067s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.54s)

TestNetworkPlugins/group/kindnet/Start (69.48s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m9.476980917s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.48s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-117441 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-117441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nd6wg" [6ac70c8e-e95e-4539-9ce9-0bb7abf2d6c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nd6wg" [6ac70c8e-e95e-4539-9ce9-0bb7abf2d6c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006024022s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-117441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
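The three short checks above (DNS, Localhost, HairPin) all boil down to one-liners executed inside the netcat deployment: nslookup against the cluster DNS service, nc against localhost, and nc against the pod's own service name to verify hairpin connectivity. A minimal sketch of the same three probes, assuming kubectl is on PATH and the auto-117441 context and netcat deployment from this run exist:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Each probe mirrors a command from the log above: DNS resolution of
// kubernetes.default, a localhost port check, and a hairpin check that
// dials the pod's own service name ("netcat") on port 8080.
func main() {
	probes := [][]string{
		{"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, p := range probes {
		args := append([]string{"--context", "auto-117441"}, p...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("kubectl %v\n%s(err=%v)\n", args, out, err)
	}
}
```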

                                                
                                    
TestPause/serial/Pause (0.85s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-060637 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

                                                
                                    
TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-060637 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-060637 --output=json --layout=cluster: exit status 2 (259.982806ms)

                                                
                                                
-- stdout --
	{"Name":"pause-060637","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-060637","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
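The JSON in the stdout above is the cluster-layout status payload, with HTTP-style status codes visible in this run (418 Paused, 405 Stopped, 200 OK). A sketch of Go structs that decode just the fields shown in this report; field names are taken from the payload itself, and the real minikube schema may carry more:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Component, Node, and ClusterStatus mirror only the keys visible in
// the report's stdout block above.
type Component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type Node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]Component `json:"Components"`
}

type ClusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]Component `json:"Components"`
	Nodes         []Node               `json:"Nodes"`
}

func main() {
	// Abbreviated copy of the payload from the log above.
	raw := `{"Name":"pause-060637","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-060637","StatusCode":200,"StatusName":"OK","Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st ClusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, st.Nodes[0].Components["kubelet"].StatusName)
}
```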

                                                
                                    
TestPause/serial/Unpause (0.71s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-060637 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
TestPause/serial/PauseAgain (0.94s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-060637 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
TestPause/serial/DeletePaused (0.98s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-060637 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.98s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (12.4s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (12.40175223s)
--- PASS: TestPause/serial/VerifyDeletedResources (12.40s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (107.6s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m47.603409282s)
--- PASS: TestNetworkPlugins/group/calico/Start (107.60s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (122.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (2m2.562965944s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (122.56s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (142.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m22.286287599s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (142.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vfzn2" [06354fae-d5e6-459b-aa3f-0366ff5391bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006518049s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-117441 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-117441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8vjqc" [ce0ce828-48eb-4e21-a796-0818b05ebcfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8vjqc" [ce0ce828-48eb-4e21-a796-0818b05ebcfc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004951402s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-117441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (98.35s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0229 02:18:52.088432  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
E0229 02:19:09.040411  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/functional-921098/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m38.350744875s)
--- PASS: TestNetworkPlugins/group/flannel/Start (98.35s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jxqw8" [efb0cfe0-efec-44f5-8665-3052a815d94c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006330185s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-117441 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.52s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-117441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-p2wv7" [e72c14d0-26ab-4aed-b48d-f539688ff815] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-p2wv7" [e72c14d0-26ab-4aed-b48d-f539688ff815] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.211272793s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-117441 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.08s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-117441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-117441 replace --force -f testdata/netcat-deployment.yaml: (2.544779088s)
E0229 02:19:37.825491  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/addons-600097/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hj5z6" [47a7262f-0b29-4138-be11-de44c52e078f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hj5z6" [47a7262f-0b29-4138-be11-de44c52e078f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00745483s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.08s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-117441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-117441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (100.43s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-117441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m40.433802098s)
--- PASS: TestNetworkPlugins/group/bridge/Start (100.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-117441 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-117441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5xz8m" [1b3d41c8-8e62-43fd-a190-9f5bd93fb388] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5xz8m" [1b3d41c8-8e62-43fd-a190-9f5bd93fb388] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.007900293s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ddx5r" [f1fa288e-ed72-4c3e-822c-741167b82aaf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006315965s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-117441 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-117441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-117441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-psszx" [96eb59f8-e836-41ef-b4ab-3fd7df9e50ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-psszx" [96eb59f8-e836-41ef-b4ab-3fd7df9e50ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004403252s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-117441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (125.37s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-247751 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-247751 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m5.370375382s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (125.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (105.12s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-915633 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-915633 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m45.117381265s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (105.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-117441 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-117441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7bz56" [8de97130-c0d5-40b5-a2d1-78d559c24668] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7bz56" [8de97130-c0d5-40b5-a2d1-78d559c24668] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006312306s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-117441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-117441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-071485 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0229 02:22:06.622708  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:22:09.183761  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:22:14.304164  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:22:24.545080  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
E0229 02:22:45.025579  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-071485 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m37.693877174s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-915633 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d069c34-3c34-4c30-8698-681e749d7fa4] Pending
helpers_test.go:344: "busybox" [3d069c34-3c34-4c30-8698-681e749d7fa4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3d069c34-3c34-4c30-8698-681e749d7fa4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004508698s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-915633 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)
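The DeployApp step above creates the busybox pod from testdata/busybox.yaml, waits for it to reach Running, and then reads the container's open-file limit. A rough equivalent using the context name from the log; the explicit polling loop here is an assumption for illustration, not the harness's actual wait implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Polls the busybox pod until its phase is Running, then runs
// `ulimit -n` inside it, as the DeployApp step does.
func main() {
	ctx := "embed-certs-915633"
	for i := 0; i < 60; i++ {
		out, _ := exec.Command("kubectl", "--context", ctx,
			"get", "pod", "busybox", "-o", "jsonpath={.status.phase}").Output()
		if string(out) == "Running" {
			break
		}
		time.Sleep(2 * time.Second)
	}
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	fmt.Printf("ulimit -n => %s (err=%v)\n", out, err)
}
```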

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-247751 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22d0d5e3-3658-4122-adf1-8faffa8de817] Pending
helpers_test.go:344: "busybox" [22d0d5e3-3658-4122-adf1-8faffa8de817] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [22d0d5e3-3658-4122-adf1-8faffa8de817] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004632207s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-247751 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-915633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-915633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.18364699s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-915633 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-247751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-247751 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-071485 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [41887cd0-661c-4640-b347-f44cca76598a] Pending
helpers_test.go:344: "busybox" [41887cd0-661c-4640-b347-f44cca76598a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [41887cd0-661c-4640-b347-f44cca76598a] Running
E0229 02:23:51.350545  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005340769s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-071485 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-071485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-071485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.085441728s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-071485 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (653.68s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-915633 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-915633 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m53.393926837s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-915633 -n embed-certs-915633
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (653.68s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (596.7s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-247751 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0229 02:25:40.606091  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/calico-117441/client.crt: no such file or directory
E0229 02:25:42.376495  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:25:48.340695  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
E0229 02:25:54.232240  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/kindnet-117441/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-247751 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (9m56.406407214s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-247751 -n no-preload-247751
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (596.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (873.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-071485 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-071485 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (14m32.971501134s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071485 -n default-k8s-diff-port-071485
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (873.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-275488 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-275488 --alsologtostderr -v=3: (1.246176865s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-275488 -n old-k8s-version-275488: exit status 7 (75.984394ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-275488 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
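The "(may be ok)" annotation above reflects that `minikube status` signals machine state through its exit code rather than failing outright; in this run, exit status 7 arrives alongside a "Stopped" host, and the test proceeds anyway. A minimal sketch, reusing the binary path and profile name from the log, that inspects the code instead of aborting on any non-zero exit:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Runs `minikube status` and, instead of treating every non-zero exit
// as a failure, reports the exit code: in this report a stopped
// profile exits 7 while printing "Stopped", which the test accepts.
func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-275488")
	out, err := cmd.CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Printf("status %q, exit code %d (may be ok)\n", out, ee.ExitCode())
		return
	}
	fmt.Printf("status %q, err=%v\n", out, err)
}
```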

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (56.44s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-052502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0229 02:50:21.894800  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/enable-default-cni-117441/client.crt: no such file or directory
E0229 02:50:27.859118  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/flannel-117441/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-052502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (56.440686031s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (56.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-052502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-052502 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.479178014s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-052502 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-052502 --alsologtostderr -v=3: (11.130563052s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-052502 -n newest-cni-052502
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-052502 -n newest-cni-052502: exit status 7 (77.933873ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-052502 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (46.84s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-052502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0229 02:51:36.515429  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/bridge-117441/client.crt: no such file or directory
E0229 02:52:04.061802  323885 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-316644/.minikube/profiles/auto-117441/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-052502 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (46.481448044s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-052502 -n newest-cni-052502
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (46.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-052502 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.82s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-052502 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-052502 --alsologtostderr -v=1: (1.070607384s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-052502 -n newest-cni-052502
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-052502 -n newest-cni-052502: exit status 2 (243.75753ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-052502 -n newest-cni-052502
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-052502 -n newest-cni-052502: exit status 2 (243.654335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-052502 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-052502 -n newest-cni-052502
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-052502 -n newest-cni-052502
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.82s)
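The Pause sequence above is effectively a small state machine: pause, confirm the apiserver reports Paused and the kubelet Stopped (status exits 2 in both cases here, flagged "may be ok"), then unpause and re-check both. A hedged sketch of the same loop, reusing the binary path and profile name from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary used throughout this report
// and returns stdout plus the raw error, so paused states (which exit
// non-zero, as seen above) can still be inspected.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	p := "newest-cni-052502"
	steps := [][]string{
		{"pause", "-p", p, "--alsologtostderr", "-v=1"},
		{"status", "--format={{.APIServer}}", "-p", p, "-n", p}, // expect Paused, exit 2
		{"status", "--format={{.Kubelet}}", "-p", p, "-n", p},   // expect Stopped, exit 2
		{"unpause", "-p", p, "--alsologtostderr", "-v=1"},
		{"status", "--format={{.APIServer}}", "-p", p, "-n", p},
		{"status", "--format={{.Kubelet}}", "-p", p, "-n", p},
	}
	for _, s := range steps {
		out, err := run(s...)
		fmt.Printf("%v -> %q (err=%v)\n", s, out, err)
	}
}
```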

                                                
                                    

Test skip (39/309)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
144 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
150 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
230 TestChangeNoneUser 0
233 TestScheduledStopWindows 0
235 TestSkaffold 0
237 TestInsufficientStorage 0
241 TestMissingContainerUpgrade 0
246 TestNetworkPlugins/group/kubenet 3.34
255 TestNetworkPlugins/group/cilium 3.65
271 TestStartStop/group/disable-driver-mounts 0.18
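Each of the skips below can be re-run locally through Go's test runner. A minimal sketch, assuming a minikube source checkout whose integration suite lives in test/integration and a prebuilt binary at out/minikube-linux-amd64 (both paths are inferred from the file names and commands in this report; the -minikube-start-args flag follows minikube's integration-test conventions and may differ by version):

# Hypothetical local re-run of one skipped test from the table above.
# Everything after -args is passed straight through to the test binary.
go test ./test/integration -v -run 'TestDownloadOnly/v1.16.0/cached-images' \
  -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'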
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

x
+
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

x
+
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

x
+
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

x
+
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

x
+
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

x
+
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

x
+
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

x
+
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

x
+
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

x
+
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

x
+
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

x
+
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

x
+
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

x
+
TestNetworkPlugins/group/kubenet (3.34s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-117441 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-117441

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-117441

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-117441

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-117441

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-117441

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-117441

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-117441

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-117441

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-117441

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-117441

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: /etc/hosts:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: /etc/resolv.conf:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-117441

>>> host: crictl pods:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: crictl containers:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> k8s: describe netcat deployment:
error: context "kubenet-117441" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-117441" does not exist

>>> k8s: netcat logs:
error: context "kubenet-117441" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-117441" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-117441" does not exist

>>> k8s: coredns logs:
error: context "kubenet-117441" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-117441" does not exist

>>> k8s: api server logs:
error: context "kubenet-117441" does not exist

>>> host: /etc/cni:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: ip a s:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: ip r s:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: iptables-save:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: iptables table nat:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-117441" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-117441" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-117441" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: kubelet daemon config:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> k8s: kubelet logs:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-117441

>>> host: docker daemon status:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: docker daemon config:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: docker system info:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: cri-docker daemon status:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: cri-docker daemon config:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: cri-dockerd version:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: containerd daemon status:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: containerd daemon config:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: containerd config dump:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: crio daemon status:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: crio daemon config:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: /etc/crio:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

>>> host: crio config:
* Profile "kubenet-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-117441"

----------------------- debugLogs end: kubenet-117441 [took: 3.184552037s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-117441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-117441
--- SKIP: TestNetworkPlugins/group/kubenet (3.34s)

x
+
TestNetworkPlugins/group/cilium (3.65s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-117441 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-117441

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-117441

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-117441

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-117441

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-117441

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-117441

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-117441

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-117441

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-117441

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-117441

>>> host: /etc/nsswitch.conf:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: /etc/hosts:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: /etc/resolv.conf:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-117441

>>> host: crictl pods:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: crictl containers:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> k8s: describe netcat deployment:
error: context "cilium-117441" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-117441" does not exist

>>> k8s: netcat logs:
error: context "cilium-117441" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-117441" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-117441" does not exist

>>> k8s: coredns logs:
error: context "cilium-117441" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-117441" does not exist

>>> k8s: api server logs:
error: context "cilium-117441" does not exist

>>> host: /etc/cni:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: ip a s:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: ip r s:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: iptables-save:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: iptables table nat:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-117441

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-117441

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-117441" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-117441" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-117441

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-117441

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-117441" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-117441" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-117441" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-117441" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-117441" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: kubelet daemon config:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> k8s: kubelet logs:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-117441

>>> host: docker daemon status:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: docker daemon config:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: docker system info:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: cri-docker daemon status:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: cri-docker daemon config:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: cri-dockerd version:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: containerd daemon status:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: containerd daemon config:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: containerd config dump:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: crio daemon status:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: crio daemon config:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: /etc/crio:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

>>> host: crio config:
* Profile "cilium-117441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-117441"

----------------------- debugLogs end: cilium-117441 [took: 3.497384136s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-117441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-117441
--- SKIP: TestNetworkPlugins/group/cilium (3.65s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-542968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-542968
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)